| column | type | values |
| --- | --- | --- |
| content | string | lengths 633 to 9.91k |
| label | string | 7 classes |
| category | string | 7 classes |
| dataset | string | 1 class |
| node_id | int64 | 0 to 2.71k |
| split | string | 3 classes |
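For orientation, a minimal sketch of how rows with this schema can be iterated over with the Hugging Face `datasets` library; the repository id below is a placeholder, not this dataset's actual path:

```python
# Hypothetical loading sketch; replace "user/cora-node-classification"
# with the real repository id for this dataset.
from datasets import load_dataset

ds = load_dataset("user/cora-node-classification")
row = ds["train"][0]
print(row["content"][:200])           # prompt: target abstract + 1-hop neighbor texts
print(row["label"], row["category"])  # e.g. "5", "Reinforcement Learning"
print(row["node_id"], row["split"])   # graph node id and train/val/test split
```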
1-hop neighbor's text information: On the convergence of stochastic iterative dynamic programming algorithms. : This project was supported in part by a grant from the McDonnell-Pew Foundation, by a grant from ATR Human Information Processing Research Laboratories, by a grant from Siemens Corporation, and by grant N00014-90-J-1942 from the Office of Naval Research. The project was also supported by NSF grant ASC-9217041 in support of the Center for Biological and Computational Learning at MIT, including funds provided by DARPA under the HPCC program. Michael I. Jordan is an NSF Presidential Young Investigator. 1-hop neighbor's text information: Generalization in reinforcement learning: Safely approximating the value function. : To appear in: G. Tesauro, D. S. Touretzky and T. K. Leen, eds., Advances in Neural Information Processing Systems 7, MIT Press, Cambridge MA, 1995. A straightforward approach to the curse of dimensionality in reinforcement learning and dynamic programming is to replace the lookup table with a generalizing function approximator such as a neural net. Although this has been successful in the domain of backgammon, there is no guarantee of convergence. In this paper, we show that the combination of dynamic programming and function approximation is not robust, and in even very benign cases, may produce an entirely wrong policy. We then introduce Grow-Support, a new algorithm which is safe from divergence yet can still reap the benefits of successful generalization. 1-hop neighbor's text information: Learning to Act using Real-Time Dynamic Programming. : The authors thank Rich Yee, Vijay Gullapalli, Brian Pinette, and Jonathan Bachrach for helping to clarify the relationships between heuristic search and control. We thank Rich Sutton, Chris Watkins, Paul Werbos, and Ron Williams for sharing their fundamental insights into this subject through numerous discussions, and we further thank Rich Sutton for first making us aware of Korf's research and for his very thoughtful comments on the manuscript. We are very grateful to Dimitri Bertsekas and Steven Sullivan for independently pointing out an error in an earlier version of this article. Finally, we thank Harry Klopf, whose insight and persistence encouraged our interest in this class of learning problems. This research was supported by grants to A.G. Barto from the National Science Foundation (ECS-8912623 and ECS-9214866) and the Air Force Office of Scientific Research, Bolling AFB (AFOSR-89-0526). Target text information: Issues in using function approximation for reinforcement learning. : Reinforcement learning techniques address the problem of learning to select actions in unknown, dynamic environments. It is widely acknowledged that to be of use in complex domains, reinforcement learning techniques must be combined with generalizing function approximation methods such as artificial neural networks. Little, however, is understood about the theoretical properties of such combinations, and many researchers have encountered failures in practice. In this paper we identify a prime source of such failures, namely a systematic overestimation of utility values. Using Watkins' Q-Learning [18] as an example, we give a theoretical account of the phenomenon, deriving conditions under which one may expect it to cause learning to fail. Employing some of the most popular function approximators, we present experimental results which support the theoretical findings.
I provide the content of the target node and its neighbors' information. The relation between the target node and its 1-hop neighbors is 'citation'. The 7 categories are: 0: Rule_Learning 1: Neural_Networks 2: Case_Based 3: Genetic_Algorithms 4: Theory 5: Reinforcement_Learning 6: Probabilistic_Methods Question: Based on the content of the target and neighbors' scientific publications, predict the category ID (0 to 6) for the target node.
5
Reinforcement Learning
cora
2,183
train
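The overestimation effect described in this row's target abstract can be illustrated numerically: even when every action has the same true value, zero-mean approximation noise makes the max over estimated Q-values biased upward. A minimal sketch; the action count and noise level are arbitrary choices, not taken from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)
true_q = np.zeros(5)                 # all 5 actions are equally good
eps = 0.1                            # approximation noise level
noisy = true_q + rng.uniform(-eps, eps, size=(100_000, 5))
bias = noisy.max(axis=1).mean() - true_q.max()
# for n i.i.d. uniform errors the expected bias is eps * (n - 1) / (n + 1)
print(f"average overestimation of max_a Q(s, a): {bias:.4f}")  # ~0.067
```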
1-hop neighbor's text information: Globally convergent inexact Newton methods, : We propose an algorithm for solving systems of monotone equations which combines Newton, proximal point, and projection methodologies. An important property of the algorithm is that the whole sequence of iterates is always globally convergent to a solution of the system without any additional regularity assumptions. Moreover, under standard assumptions the local superlinear rate of convergence is achieved. As opposed to classical globalization strategies for Newton methods, for computing the stepsize we do not use line-search aimed at decreasing the value of some merit function. Instead, line-search in the approximate Newton direction is used to construct an appropriate hyperplane which separates the current iterate from the solution set. This step is followed by projecting the current iterate onto this hyperplane, which ensures global convergence of the algorithm. Computational cost of each iteration of our method is of the same order as that of the classical damped Newton method. The crucial advantage is that our method is truly globally convergent. In particular, it cannot get trapped in a stationary point of a merit function. The presented algorithm is motivated by the hybrid projection-proximal point method proposed in [25]. Target text information: A hybrid projection proximal point algorithm. : We propose a modification of the classical proximal point algorithm for finding zeroes of a maximal monotone operator in a Hilbert space. In particular, an approximate proximal point iteration is used to construct a hyperplane which strictly separates the current iterate from the solution set of the problem. This step is then followed by a projection of the current iterate onto the separating hyperplane. All information required for this projection operation is readily available at the end of the approximate proximal step, and therefore this projection entails no additional computational cost. The new algorithm allows significant relaxation of tolerance requirements imposed on the solution of proximal point subproblems, which yields a more practical framework. Weak global convergence and local linear rate of convergence are established under suitable assumptions. Additionally, the presented analysis yields an alternative proof of convergence for the exact proximal point method, which allows a nice geometric interpretation, and is somewhat more intuitive than the classical proof. I provide the content of the target node and its neighbors' information. The relation between the target node and its 1-hop neighbors is 'citation'. The 7 categories are: 0: Rule_Learning 1: Neural_Networks 2: Case_Based 3: Genetic_Algorithms 4: Theory 5: Reinforcement_Learning 6: Probabilistic_Methods Question: Based on the content of the target and neighbors' scientific publications, predict the category ID (0 to 6) for the target node.
1
Neural Networks
cora
1,737
test
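To make the hyperplane-projection step of the target abstract concrete, here is a toy sketch for the monotone operator T(x) = Ax with A positive semidefinite; the operator, the exact proximal solve, and the parameter c are illustrative choices rather than the paper's general Hilbert-space setting:

```python
import numpy as np

A = np.array([[2.0, 0.0], [0.0, 0.5]])  # T(x) = A x, with zero set {0}
c = 1.0                                  # proximal regularization parameter
x = np.array([4.0, -3.0])

for k in range(20):
    # (exact) proximal step: solve z + c*T(z) = x
    z = np.linalg.solve(np.eye(2) + c * A, x)
    v = A @ z                            # v in T(z) defines the hyperplane
    # project x onto the halfspace {y : <v, y - z> <= 0}, which separates
    # the current iterate from the solution set
    gap = v @ (x - z)
    if gap > 0:
        x = x - (gap / (v @ v)) * v
print(x)  # iterates approach the zero of T
```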
1-hop neighbor's text information: Hidden Markov Modeling of simultaneously recorded cells in the Associative cortex of behaving monkeys: A widely held idea regarding information processing in the brain is the cell-assembly hypothesis suggested by Hebb in 1949. According to this hypothesis, the basic unit of information processing in the brain is an assembly of cells, which can act briefly as a closed system, in response to a specific stimulus. This work presents a novel method of characterizing this supposed activity using a Hidden Markov Model. This model is able to reveal some of the underlying cortical network activity of behavioral processes. In our study the process in hand was the simultaneous activity of several cells recorded from the frontal cortex of behaving monkeys. Using such a model we were able to identify the behavioral mode of the animal and directly identify the corresponding collective network activity. Furthermore, the segmentation of the data into the discrete states also provides direct evidence for the state dependency of the short-time correlation functions between the same pair of cells. Thus, this cross-correlation depends on the network state of activity and not on local connectivity alone. Target text information: Cortical activity flips among quasi-stationary states. : M. Abeles, H. Bergman and E. Vaadia, School of Medicine and Center for Neural Computation, Hebrew University, POB 12272, Jerusalem 91120, Israel. E. Seidemann and I. Meilijson, School of Mathematical Sciences, Raymond and Beverly Sackler Faculty of Exact Sciences, and School of Medicine, Tel Aviv University, 69978 Tel Aviv, Israel. I. Gat and N. Tishby, Institute of Computer Science and Center for Neural Computation, Hebrew University, Jerusalem 91904, Israel. I provide the content of the target node and its neighbors' information. The relation between the target node and its 1-hop neighbors is 'citation'. The 7 categories are: 0: Rule_Learning 1: Neural_Networks 2: Case_Based 3: Genetic_Algorithms 4: Theory 5: Reinforcement_Learning 6: Probabilistic_Methods Question: Based on the content of the target and neighbors' scientific publications, predict the category ID (0 to 6) for the target node.
1
Neural Networks
cora
2,172
test
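The "flips among quasi-stationary states" picture can be sketched with a tiny two-state discrete HMM and a Viterbi segmentation; the sticky transition matrix, emission probabilities, and observation sequence below are all invented for illustration:

```python
import numpy as np

pi = np.array([0.5, 0.5])                       # initial state probabilities
Atr = np.array([[0.95, 0.05], [0.05, 0.95]])    # sticky transitions -> quasi-stationary states
B = np.array([[0.7, 0.2, 0.1],                  # state 0: mostly low-activity symbol
              [0.1, 0.2, 0.7]])                 # state 1: mostly high-activity symbol
obs = np.array([0, 0, 1, 0, 2, 2, 1, 2, 2, 0])  # discretized population activity

T = len(obs)
delta = np.zeros((T, 2))
psi = np.zeros((T, 2), dtype=int)
delta[0] = np.log(pi) + np.log(B[:, obs[0]])
for t in range(1, T):
    scores = delta[t - 1][:, None] + np.log(Atr)  # scores[i, j]: best path ending i -> j
    psi[t] = scores.argmax(axis=0)
    delta[t] = scores.max(axis=0) + np.log(B[:, obs[t]])
states = np.zeros(T, dtype=int)
states[-1] = delta[-1].argmax()
for t in range(T - 2, -1, -1):                   # backtrack the most likely state sequence
    states[t] = psi[t + 1, states[t + 1]]
print(states)  # activity flips between the two quasi-stationary states
```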
1-hop neighbor's text information: A General Result on the Stabilization of Linear Systems Using Bounded Controls. : ABSTRACT We present two constructions of controllers that globally stabilize linear systems subject to control saturation. We allow essentially arbitrary saturation functions. The only conditions imposed on the system are the obvious necessary ones, namely that no eigenvalues of the uncontrolled system have positive real part and that the standard stabilizability rank condition hold. One of the constructions is in terms of a "neural-network type" one-hidden layer architecture, while the other one is in terms of cascades of linear maps and saturations. 1-hop neighbor's text information: Applying Corollary 4.3 to the second equation in (47), together with the estimates (38)-(39), bounds the right-hand side of (54). Supposing lim sup_{t→∞} |y(t)| > 0, the estimates (50)-(61) then bound |ỹ|_ev and |y|_ev by constant multiples of δ, and with a suitable choice of the constant N the conclusion follows. To complete the proof, we need to deal with the general case of m > 1 inputs. This is done by induction on m, as in the proof in [14], and will be omitted here. [1] Fuller, A.T., "In the large stability of relay and saturated control systems with linear controllers," Int. J. Control 10 (1969): 457-480. [2] Gutman, P-O., and P. Hagander, "A new design of constrained controllers for linear systems," IEEE Trans. Automat. Contr. AC-30 (1985): 22-23. [3] Kosut, R.L., "Design of linear systems with saturating linear control and bounded states," IEEE Trans. Autom. Control AC-28 (1983): 121-124. [4] Krikelis, N.J., and S.K. Barkas, "Design of tracking systems subject to actuator saturation and integrator wind-up," Int. J. Control 39 (1984): 667-682. [5] Schmitendorf, W.E. and B.R. Barmish, "Null controllability of linear systems with constrained controls," SIAM J. Control and Opt. 18 (1980): 327-345. [6] Slemrod, M., "Feedback stabilization of a linear control system in Hilbert space," Math. Control Signals Systems 2 (1989): 265-285. [7] Slotine, J-J.E., and W. Li, Applied Nonlinear Control, Prentice-Hall, Englewood Cliffs, 1991. [8] Sontag, E.D., "An algebraic approach to bounded controllability of linear systems," Int. J. Control 39 (1984): 181-188. [9] Sontag, E.D., "Remarks on stabilization and input-to-state stability," Proc. IEEE CDC, Tampa, Dec. 1989, IEEE Publications, 1989, pp. 1376-1378. [10] Sontag, E.D., Mathematical Control Theory: Deterministic Finite Dimensional Systems, Springer, New York, 1990. [11] Sontag, E.D., and H.J. Sussmann, "Nonlinear output feedback design for linear systems with saturating controls," Proc. IEEE CDC, Honolulu, Dec. 1990, IEEE Publications, 1990, pp. 3414-3416. [12] Sussmann, H.J. and Y. Yang, "On the stabilizability of multiple integrators by means of bounded feedback controls," Proc. IEEE CDC, Brighton, UK, Dec. 1991, IEEE Publications, 1991: 70-73. [13] Teel, A.R., "Global stabilization and restricted tracking for multiple integrators with bounded controls," Systems and Control Letters 18 (1992): 165-171. [14] Yang, Y., H.J. Sussmann and E.D. Sontag, "Stabilization of linear systems with bounded controls," Proc. June 1992 NOLCOS, Bordeaux, (M. Fliess, Ed.), IFAC Publications, pp. 15-20. [15] Yang, Y., Global Stabilization of Linear Systems with Bounded Feedback, Ph.D. Thesis, Mathematics Department, Rutgers University, 1993. 1-hop neighbor's text information: "Stabilization with saturated actuators, a worked example: F-8 longitudinal flight control," : The authors and coworkers recently proved general theorems on the global stabilization of linear systems subject to control saturation. This paper develops in detail an explicit design for the linearized equations of longitudinal flight control for an F-8 aircraft, and tests the obtained controller on the original nonlinear model. This paper represents the first detailed derivation of a controller using the techniques in question, and the results are very encouraging. Target text information: Global stabilization of linear systems with bounded feedback. : This paper deals with the problem of global stabilization of linear discrete time systems by means of bounded feedback laws. The main result proved is an analog of one proved for the continuous time case by the authors, and shows that such stabilization is possible if and only if the system is stabilizable with arbitrary controls and the transition matrix has spectral radius less than or equal to one. The proof provides in principle an algorithm for the construction of such feedback laws, which can be implemented either as cascades or as parallel connections ("single hidden layer neural networks") of simple saturation functions. I provide the content of the target node and its neighbors' information. The relation between the target node and its 1-hop neighbors is 'citation'. The 7 categories are: 0: Rule_Learning 1: Neural_Networks 2: Case_Based 3: Genetic_Algorithms 4: Theory 5: Reinforcement_Learning 6: Probabilistic_Methods Question: Based on the content of the target and neighbors' scientific publications, predict the category ID (0 to 6) for the target node.
1
Neural Networks
cora
2,502
test
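A toy simulation in the spirit of the target result: a discrete-time double integrator (transition matrix with spectral radius exactly one) driven toward the origin by a saturated linear feedback. The gains are hand-picked for this particular run; they are not the cascade/parallel construction from the paper:

```python
import numpy as np

sat = lambda v: float(np.clip(v, -1.0, 1.0))  # unit saturation nonlinearity
x1, x2 = 50.0, -10.0                          # large initial state
for k in range(400):
    u = sat(-0.25 * x1 - 0.75 * x2)           # bounded control, |u| <= 1
    x1, x2 = x1 + x2, x2 + u                  # discrete-time double integrator
print(round(x1, 4), round(x2, 4))             # state ends near the origin for this run
```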
1-hop neighbor's text information: A Bayesian approach to learning Bayesian networks with local structure. : In this paper we examine a novel addition to the known methods for learning Bayesian networks from data that improves the quality of the learned networks. Our approach explicitly represents and learns the local structure in the conditional probability tables (CPTs) that quantify these networks. This increases the space of possible models, enabling the representation of CPTs with a variable number of parameters that depends on the learned local structures. The resulting learning procedure is capable of inducing models that better emulate the real complexity of the interactions present in the data. We describe the theoretical foundations and practical aspects of learning local structures, as well as an empirical evaluation of the proposed method. This evaluation indicates that learning curves characterizing the procedure that exploits the local structure converge faster than those of the standard procedure. Our results also show that networks learned with local structure tend to be more complex (in terms of arcs), yet require fewer parameters. 1-hop neighbor's text information: A tutorial on learning Bayesian networks. : Technical Report MSR-TR-95-06 Target text information: On the sample complexity of learning Bayesian networks. : In recent years there has been an increasing interest in learning Bayesian networks from data. One of the most effective methods for learning such networks is based on the minimum description length (MDL) principle. Previous work has shown that this learning procedure is asymptotically successful: with probability one, it will converge to the target distribution, given a sufficient number of samples. However, the rate of this convergence has been hitherto unknown. In this work we examine the sample complexity of MDL based learning procedures for Bayesian networks. We show that the number of samples needed to learn an ε-close approximation (in terms of entropy distance) with confidence δ is O((1/ε)^{4/3} log(1/ε) log(1/δ) log log(1/δ)). This means that the sample complexity is a low-order polynomial in the error threshold and sub-linear in the confidence bound. We also discuss how the constants in this term depend on the complexity of the target distribution. Finally, we address questions of asymptotic minimality and propose a method for using the sample complexity results to speed up the learning process. I provide the content of the target node and its neighbors' information. The relation between the target node and its 1-hop neighbors is 'citation'. The 7 categories are: 0: Rule_Learning 1: Neural_Networks 2: Case_Based 3: Genetic_Algorithms 4: Theory 5: Reinforcement_Learning 6: Probabilistic_Methods Question: Based on the content of the target and neighbors' scientific publications, predict the category ID (0 to 6) for the target node.
6
Probabilistic Methods
cora
1,213
test
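The MDL score underlying the target abstract trades parameter cost against fit. A small sketch on synthetic binary data, comparing an independent model with one containing the edge X → Y; the scoring convention used here, (log2 N)/2 bits per free parameter plus the negative log-likelihood in bits, is one common form of the MDL score, not necessarily the paper's exact definition:

```python
import numpy as np

rng = np.random.default_rng(1)
N = 2_000
x = rng.integers(0, 2, N)
y = np.where(rng.random(N) < 0.8, x, 1 - x)  # Y strongly depends on X

def mdl_indep():
    ll = 0.0
    for v in (x, y):                          # X and Y modeled independently
        p = v.mean()
        ll += np.sum(v * np.log(p) + (1 - v) * np.log(1 - p))
    return 2 * 0.5 * np.log2(N) - ll / np.log(2)   # 2 free parameters

def mdl_edge():
    p = x.mean()
    ll = np.sum(x * np.log(p) + (1 - x) * np.log(1 - p))
    for v in (0, 1):                          # P(Y | X = v) for each parent value
        m = x == v
        q = y[m].mean()
        ll += np.sum(y[m] * np.log(q) + (1 - y[m]) * np.log(1 - q))
    return 3 * 0.5 * np.log2(N) - ll / np.log(2)   # 3 free parameters

print(mdl_indep(), mdl_edge())  # the edge model gets the shorter description
```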
1-hop neighbor's text information: Paying attention to the right things: Issues of focus in case-based creative design. : Case-based reasoning can be used to explain many creative design processes, since much creativity stems from using old solutions in novel ways. To understand the role cases play, we conducted an exploratory study of a seven-week student creative design project. This paper discusses the observations we made and the issues that arise in understanding and modeling creative design processes. We found particularly interesting the role of imagery in reminding and in evaluating design options. This included visualization, mental simulation, gesturing, and even sound effects. An important class of issues we repeatedly encounter in our modeling efforts concerns the focus of the designer. (For example, which problem constraints should be reformulated? Which evaluative issues should be raised?) Cases help to address these focus issues. 1-hop neighbor's text information: Towards more creative case-based design systems. : Case-based reasoning (CBR) has a great deal to offer in supporting creative design, particularly processes that rely heavily on previous design experience, such as framing the problem and evaluating design alternatives. However, most existing CBR systems are not living up to their potential. They tend to adapt and reuse old solutions in routine ways, producing robust but uninspired results. Little research effort has been directed towards the kinds of situation assessment, evaluation, and assimilation processes that facilitate the exploration of ideas and the elaboration and redefinition of problems that are crucial to creative design. Also, their typically rigid control structures do not facilitate the kinds of strategic control and opportunism inherent in creative reasoning. In this paper, we describe the types of behavior we would like case-based design systems to support, based on a study of designers working on a mechanical engineering problem. We show how the standard CBR framework should be extended and we describe an architecture we are developing to experiment with these ideas. 1-hop neighbor's text information: An investigation of marker-passing algorithms for analogue retrieval. : If analogy and case-based reasoning systems are to scale up to very large case bases, it is important to analyze the various methods used for retrieving analogues to identify the features of the problem for which they are appropriate. This paper reports on one such analysis, a comparison of retrieval by marker passing or spreading activation in a semantic network with Knowledge-Directed Spreading Activation, a method developed to be well-suited for retrieving semantically distant analogues from a large knowledge base. The analysis has two complementary components: (1) a theoretical model of the retrieval time based on a number of problem characteristics, and (2) experiments showing how the retrieval time of the approaches varies with the knowledge base size. These two components, taken together, suggest that KDSA is more likely than SA to be able to scale up to retrieval in large knowledge bases. Target text information: Finding analogues for innovative design. : Knowledge Systems Laboratory March 1995 Report No. KSL 95-32 I provide the content of the target node and its neighbors' information. The relation between the target node and its 1-hop neighbors is 'citation'.
The 7 categories are: 0: Rule_Learning 1: Neural_Networks 2: Case_Based 3: Genetic_Algorithms 4: Theory 5: Reinforcement_Learning 6: Probabilistic_Methods Question: Based on the content of the target and neighbors' scientific publications, predict the category ID (0 to 6) for the target node.
2
Case Based
cora
1,767
test
1-hop neighbor's text information: Least-Squares Temporal Difference Learning: Submitted to NIPS-98 TD(λ) is a popular family of algorithms for approximate policy evaluation in large MDPs. TD(λ) works by incrementally updating the value function after each observed transition. It has two major drawbacks: it makes inefficient use of data, and it requires the user to manually tune a stepsize schedule for good performance. For the case of linear value function approximations and λ = 0, the Least-Squares TD (LSTD) algorithm of Bradtke and Barto [5] eliminates all stepsize parameters and improves data efficiency. This paper extends Bradtke and Barto's work in three significant ways. First, it presents a simpler derivation of the LSTD algorithm. Second, it generalizes from λ = 0 to arbitrary values of λ; at the extreme of λ = 1, the resulting algorithm is shown to be a practical formulation of supervised linear regression. Third, it presents a novel, intuitive interpretation of LSTD as a model-based reinforcement learning technique. Target text information: A comparison of direct and model-based reinforcement learning. : This paper compares direct reinforcement learning (no explicit model) and model-based reinforcement learning on a simple task: pendulum swing up. We find that in this task model-based approaches support reinforcement learning from smaller amounts of training data and efficient handling of changing goals. I provide the content of the target node and its neighbors' information. The relation between the target node and its 1-hop neighbors is 'citation'. The 7 categories are: 0: Rule_Learning 1: Neural_Networks 2: Case_Based 3: Genetic_Algorithms 4: Theory 5: Reinforcement_Learning 6: Probabilistic_Methods Question: Based on the content of the target and neighbors' scientific publications, predict the category ID (0 to 6) for the target node.
5
Reinforcement Learning
cora
2,455
test
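A minimal LSTD(0) sketch in the A w = b form described in the neighbor abstract, run on a made-up five-state chain with two linear features; everything about the chain, rewards, and features is illustrative:

```python
import numpy as np

gamma = 0.9
n_states, n_feats = 5, 2
phi = lambda s: np.array([1.0, s / (n_states - 1)])  # simple linear features

rng = np.random.default_rng(0)
A = np.zeros((n_feats, n_feats))
b = np.zeros(n_feats)
s = 0
for t in range(10_000):                               # one long trajectory
    s2 = min(s + 1, n_states - 1) if rng.random() < 0.7 else max(s - 1, 0)
    r = 1.0 if s2 == n_states - 1 else 0.0
    A += np.outer(phi(s), phi(s) - gamma * phi(s2))   # accumulate LSTD statistics
    b += phi(s) * r
    s = 0 if s2 == n_states - 1 else s2               # restart after reaching the goal
w = np.linalg.solve(A, b)                             # no stepsize schedule needed
print([float(phi(s) @ w) for s in range(n_states)])   # approximate state values
```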
1-hop neighbor's text information: Delayed Exceptions: Speculative Execution of Trapping Instructions. : Paper and BibTeX entry are available at http://www.complang.tuwien.ac.at/papers/. This paper was published in: Compiler Construction (CC '94), Springer LNCS 786, 1994, pages 158-171. Abstract: Superscalar processors, which execute basic blocks sequentially, cannot use much instruction level parallelism. Speculative execution has been proposed to execute basic blocks in parallel. A pure software approach suffers from low performance, because exception-generating instructions cannot be executed speculatively. We propose delayed exceptions, a combination of hardware and compiler extensions that can provide high performance and correct exception handling in compiler-based speculative execution. Delayed exceptions exploit the fact that exceptions are rare. The compiler assumes the typical case (no exceptions), schedules the code accordingly, and inserts run-time checks and fix-up code that ensure correct execution when exceptions do happen. 1-hop neighbor's text information: Control Flow Prediction for Dynamic ILP Processors. : We introduce a technique to enhance the ability of dynamic ILP processors to exploit (speculatively executed) parallelism. Existing branch prediction mechanisms used to establish a dynamic window from which ILP can be extracted are limited in their abilities to: (i) create a large, accurate dynamic window, (ii) initiate a large number of instructions into this window in every cycle, and (iii) traverse multiple branches of the control flow graph per prediction. We introduce control flow prediction which uses information in the control flow graph of a program to overcome these limitations. We discuss how information present in the control flow graph can be represented using multiblocks, and conveyed to the hardware using Control Flow Tables and Control Flow Prediction Buffers. We evaluate the potential of control flow prediction on an abstract machine and on a dynamic ILP processing model. Our results indicate that control flow prediction is a powerful and effective assist to the hardware in making more informed run time decisions about program control flow. 1-hop neighbor's text information: "A Framework for Statistical Modeling of Superscalar Processor Performance," Proc. : The current trace-driven simulation approach to determine superscalar processor performance is widely used but has some shortcomings. Modern benchmarks generate extremely long traces, resulting in problems with data storage, as well as very long simulation run times. More fundamentally, simulation generally does not provide significant insight into the factors that determine performance or a characterization of their interactions. This paper proposes a theoretical model of superscalar processor performance that addresses these shortcomings. Performance is viewed as an interaction of program parallelism and machine parallelism. Both program and machine parallelisms are decomposed into multiple component functions. Methods for measuring or computing these functions are described. The functions are combined to provide a model of the interaction between program and machine parallelisms and an accurate estimate of the performance. The computed performance, based on this model, is compared to simulated performance for six benchmarks from the SPEC 92 suite on several configurations of the IBM RS/6000 instruction set architecture. Target text information: Limits of control flow on parallelism.
: This paper discusses three techniques useful in relaxing the constraints imposed by control flow on parallelism: control dependence analysis, executing multiple flows of control simultaneously, and speculative execution. We evaluate these techniques by using trace simulations to find the limits of parallelism for machines that employ different combinations of these techniques. We have three major results. First, local regions of code have limited parallelism, and control dependence analysis is useful in extracting global parallelism from different parts of a program. Second, a superscalar processor is fundamentally limited because it cannot execute independent regions of code concurrently. Higher performance can be obtained with machines, such as multiprocessors and dataflow machines, that can simultaneously follow multiple flows of control. Finally, without speculative execution to allow instructions to execute before their control dependences are resolved, only modest amounts of parallelism can be obtained for programs with complex control flow. I provide the content of the target node and its neighbors' information. The relation between the target node and its 1-hop neighbors is 'citation'. The 7 categories are: 0: Rule_Learning 1: Neural_Networks 2: Case_Based 3: Genetic_Algorithms 4: Theory 5: Reinforcement_Learning 6: Probabilistic_Methods Question: Based on the content of the target and neighbors' scientific publications, predict the category ID (0 to 6) for the target node.
0
Rule Learning
cora
2,589
test
1-hop neighbor's text information: Competitive environments evolve better solutions for complex tasks. : 1-hop neighbor's text information: Finding opponents worth beating: Methods for competitive co-evolution. : We consider "competitive coevolution," in which fitness is based on direct competition among individuals selected from two independently evolving populations of "hosts" and "parasites." Competitive coevolution can lead to an "arms race," in which the two populations reciprocally drive one another to increasing levels of performance and complexity. We use the games of Nim and 3-D Tic-Tac-Toe as test problems to explore three new techniques in competitive coevolution. "Competitive fitness sharing" changes the way fitness is measured, "shared sampling" provides a method for selecting a strong, diverse set of parasites, and the "hall of fame" encourages arms races by saving good individuals from prior generations. We provide several different motivations for these methods, and mathematical insights into their use. Experimental comparisons are done, and a detailed analysis of these experiments is presented in terms of testing issues, diversity, extinction, arms race progress measurements, and drift. 1-hop neighbor's text information: Tackling the Boolean even n parity problem with genetic programming and limited-error fitness. : This paper presents Limited Error Fitness (LEF), a modification to the standard supervised learning approach in Genetic Programming (GP), in which an individual's fitness score is based on how many cases remain uncovered in the ordered training set after the individual exceeds an error limit. The training set order and the error limit are both altered dynamically in response to the performance of the fittest individual in the previous generation. Target text information: Small populations over many generations can beat large populations over few generations in genetic programming. : This paper looks at the use of small populations in Genetic Programming (GP), where the trend in the literature appears to be towards using as large a population as possible; this requires more memory resources and makes CPU usage less efficient. Dynamic Subset Selection (DSS) and Limited Error Fitness (LEF) are two different, adaptive variations of the standard supervised learning method used in GP. This paper compares the performance of GP, GP+DSS, and GP+LEF, on a 958 case classification problem, using a small population size of 50. A similar comparison between GP and GP+DSS is done on a larger and messier 3772 case classification problem. For both problems, GP+DSS with the small population size consistently produces a better answer using fewer tree evaluations than other runs using much larger populations. Even standard GP can be seen to perform well with the much smaller population size, indicating that it is certainly worth an exploratory run or three with a small population size before assuming that a large population size is necessary. It is an interesting notion that smaller can mean faster and better. I provide the content of the target node and its neighbors' information. The relation between the target node and its 1-hop neighbors is 'citation'. The 7 categories are: 0: Rule_Learning 1: Neural_Networks 2: Case_Based 3: Genetic_Algorithms 4: Theory 5: Reinforcement_Learning 6: Probabilistic_Methods Question: Based on the content of the target and neighbors' scientific publications, predict the category ID (0 to 6) for the target node.
3
Genetic Algorithms
cora
2,093
test
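A simplified sketch of the Dynamic Subset Selection idea referenced above: fitness cases are sampled each generation with probability tied to their running difficulty and their age since last selection. The exponents and sizes are illustrative, not Gathercole and Ross's exact settings:

```python
import numpy as np

rng = np.random.default_rng(2)
n_cases, subset = 958, 50
difficulty = np.ones(n_cases)   # running count of misclassifications
age = np.ones(n_cases)          # generations since a case was last selected

def select_subset():
    w = difficulty**1.5 + age**3.5                  # DSS-style weighting
    idx = rng.choice(n_cases, size=subset, replace=False, p=w / w.sum())
    age[:] += 1
    age[idx] = 0                                    # selected cases are fresh again
    return idx

idx = select_subset()
# after evaluating the population on these cases, bump the difficulty of the
# cases the best individual got wrong, e.g.: difficulty[wrong_idx] += 1
```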
1-hop neighbor's text information: Towards a Theory of Optimal Similarity Measures: The effectiveness of a case-based reasoning system is known to depend critically on its similarity measure. However, it is not clear whether there are elusive and esoteric similarity measures which might improve the performance of a case-based reasoner if substituted for the more commonly used measures. This paper therefore deals with the problem of choosing the best similarity measure, in the limited context of instance-based learning of classifications of a discrete example space. We consider both `fixed' similarity measures and `learnt' ones. In the former case, we give a definition of a similarity measure which we believe to be `optimal' w.r.t. the current prior distribution of target concepts and prove its optimality within a restricted class of similarity measures. We then show how this `optimal' similarity measure is instantiated by some specific prior distributions, and conclude that a very simple similarity measure is as good as any other in these cases. In a further section, we then show how our definition leads naturally to a conjecture about the way of learning a similarity measure from the prior distribution. 1-hop neighbor's text information: The Power of Decision Tables, : We evaluate the power of decision tables as a hypothesis space for supervised learning algorithms. Decision tables are one of the simplest hypothesis spaces possible, and usually they are easy to understand. Experimental results show that on artificial and real-world domains containing only discrete features, IDTM, an algorithm inducing decision tables, can sometimes outperform state-of-the-art algorithms such as C4.5. Surprisingly, performance is quite good on some datasets with continuous features, indicating that many datasets used in machine learning either do not require these features, or that these features have few values. We also describe an incremental method for performing cross-validation that is applicable to incremental learning algorithms including IDTM. Using incremental cross-validation, it is possible to cross-validate a given dataset and IDTM in time that is linear in the number of instances, the number of features, and the number of label values. The time for incremental cross-validation is independent of the number of folds chosen, hence leave-one-out cross-validation and ten-fold cross-validation take the same time. 1-hop neighbor's text information: Inductive bias in case-based reasoning systems. : In order to learn more about the behaviour of case-based reasoners as learning systems, we formalise a simple case-based learner as a PAC learning algorithm, using the case-based representation ⟨CB, σ⟩. We first consider a `naive' case-based learning algorithm CB1(σ_H) which learns by collecting all available cases into the case-base and which calculates similarity by counting the number of features on which two problem descriptions agree. We present results concerning the consistency of this learning algorithm and give some partial results regarding its sample complexity. We are able to characterise CB1(σ_H) as a `weak but general' learning algorithm. We then consider how the sample complexity of case-based learning can be reduced for specific classes of target concept by the application of inductive bias, or prior knowledge of the class of target concepts. Following recent work demonstrating how case-based learning can be improved by choosing a similarity measure appropriate to the concept being learnt, we define a second case-based learning `algorithm' CB2 which learns using the best possible similarity measure that might be inferred for the chosen target concept. While CB2 is not an executable learning strategy (since the chosen similarity measure is defined in terms of a priori knowledge of the actual target concept) it allows us to assess in the limit the maximum possible contribution of this approach to case-based learning. Also, in addition to illustrating the role of inductive bias, the definition of CB2 simplifies the general problem of establishing which functions might be represented in the form ⟨CB, σ⟩. Reasoning about the case-based representation in this special case has therefore been a little more straightforward than in the general case of CB1(σ_H), allowing more substantial results regarding representable functions and sample complexity to be presented for CB2. In assessing these results, we are forced to conclude that case-based learning is not the best approach to learning the chosen concept space (the space of monomial functions). We discuss, however, how our study has demonstrated, in the context of case-based learning, the operation of concepts well known in machine learning such as inductive bias and the trade-off between computational complexity and sample complexity. Target text information: A yardstick for the evaluation of case-based classifiers. : This paper proposes that the generalisation capabilities of a case-based reasoning system can be evaluated by comparison with a `rote-learning' algorithm which uses a very simple generalisation strategy. Two such algorithms are defined, and expressions for their classification accuracy are derived as a function of the size of training sample. A series of experiments using artificial and `natural' data sets is described in which the learning curve for a case-based learner is compared with those for the apparently trivial rote-learning algorithms. The results show that in a number of `plausible' situations, the learning curves for a simple case-based learner and the `majority' rote-learner can barely be distinguished, although a domain is demonstrated where favourable performance from the case-based learner is observed. This suggests that the maxim of case-based reasoning that `similar problems have similar solutions' may be useful as the basis of a generalisation strategy only in selected domains. I provide the content of the target node and its neighbors' information. The relation between the target node and its 1-hop neighbors is 'citation'. The 7 categories are: 0: Rule_Learning 1: Neural_Networks 2: Case_Based 3: Genetic_Algorithms 4: Theory 5: Reinforcement_Learning 6: Probabilistic_Methods Question: Based on the content of the target and neighbors' scientific publications, predict the category ID (0 to 6) for the target node.
2
Case Based
cora
1,964
test
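The comparison proposed by the target abstract is easy to mock up: a 1-NN case-based learner using feature-agreement (Hamming) similarity versus a "majority" rote learner on random binary data; the toy majority-vote target concept is an arbitrary choice:

```python
import numpy as np

rng = np.random.default_rng(3)
n_feats = 10
target = lambda X: (X.sum(axis=1) > n_feats // 2).astype(int)  # toy concept

Xtr = rng.integers(0, 2, (100, n_feats)); ytr = target(Xtr)
Xte = rng.integers(0, 2, (500, n_feats)); yte = target(Xte)

# case-based learner: predict the label of the most similar stored case
sim = Xte[:, None, :] == Xtr[None, :, :]          # per-feature agreements
pred_cb = ytr[sim.sum(axis=2).argmax(axis=1)]

# majority rote learner: exact match if seen before, else the majority class
maj = int(ytr.mean() > 0.5)
seen = sim.all(axis=2).any(axis=1)
pred_rote = np.where(seen, pred_cb, maj)          # exact matches reuse the stored label

print("case-based:", (pred_cb == yte).mean(), "rote:", (pred_rote == yte).mean())
```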
1-hop neighbor's text information: A Next Generation Neurally Based Autonomous Road Follower, : The use of artificial neural networks in the domain of autonomous vehicle navigation has produced promising results. ALVINN [Pomerleau, 1991] has shown that a neural system can drive a vehicle reliably and safely on many different types of roads, ranging from paved paths to interstate highways. Even with these impressive results, several areas within the neural paradigm for autonomous road following still need to be addressed. These include transparent navigation between roads of different type, simultaneous use of different sensors, and generalization to road types which the neural system has never seen. The system presented here addresses these issues with a modular neural architecture which uses pre-trained ALVINN networks and a connectionist superstructure to robustly drive on many different types of roads. Target text information: Automated Highway System: ALVINN (Autonomous Land Vehicle in a Neural Net) is a Backpropagation-trained neural network which is capable of autonomously steering a vehicle in road and highway environments. Although ALVINN is fairly robust, one of the problems with it has been the time it takes to train. As the vehicle is capable of on-line learning, the driver has to drive the car for about 2 minutes before the network is capable of autonomous operation. One reason for this is the use of Backprop. In this report, we describe the original ALVINN system, and then look at three alternative training methods - Quickprop, Cascade Correlation, and Cascade 2. We then run a series of trials using Quickprop, Cascade Correlation and Cascade 2, and compare them to a BackProp baseline. Finally, a hidden unit analysis is performed to determine what the network is learning. Applying Advanced Learning Algorithms to ALVINN I provide the content of the target node and its neighbors' information. The relation between the target node and its 1-hop neighbors is 'citation'. The 7 categories are: 0: Rule_Learning 1: Neural_Networks 2: Case_Based 3: Genetic_Algorithms 4: Theory 5: Reinforcement_Learning 6: Probabilistic_Methods Question: Based on the content of the target and neighbors' scientific publications, predict the category ID (0 to 6) for the target node.
1
Neural Networks
cora
462
test
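For reference, a sketch of the Quickprop update mentioned above: each weight takes a secant step toward the vertex of a parabola fitted through the current and previous gradients. The guard constant, growth limit, and fallback rule are simplifications of Fahlman's full set of heuristics:

```python
import numpy as np

def quickprop_step(grad, prev_grad, prev_step, lr=0.1, mu=1.75):
    # secant step toward the vertex of the parabola through the two gradients
    step = grad / (prev_grad - grad + 1e-12) * prev_step
    # growth limit: never move more than mu times the previous step
    step = np.clip(step, -mu * np.abs(prev_step), mu * np.abs(prev_step))
    # plain gradient-descent step when there is no usable previous step
    return np.where(prev_step == 0, -lr * grad, step)
```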
1-hop neighbor's text information: Using Introspective Reasoning to Select Learning Strategies. : In order to learn effectively, a system must not only possess knowledge about the world and be able to improve that knowledge, but it also must introspectively reason about how it performs a given task and what particular pieces of knowledge it needs to improve its performance at the current task. Introspection requires a declarative representation of the reasoning performed by the system during the performance task. This paper presents a taxonomy of possible reasoning failures that can occur during this task, their declarative representations, and their associations with particular learning strategies. We propose a theory of Meta-XPs, which are explanation structures that help the system identify failure types and choose appropriate learning strategies in order to avoid similar mistakes in the future. A program called Meta-AQUA embodies the theory and processes examples in the domain of drug smuggling. Target text information: Abstract: Given an arbitrary learning situation, it is difficult to determine the most appropriate learning strategy. The goal of this research is to provide a general representation and processing framework for introspective reasoning for strategy selection. The learning framework for an introspective system is to perform some reasoning task. As it does, the system also records a trace of the reasoning itself, along with the results of such reasoning. If a reasoning failure occurs, the system retrieves and applies an introspective explanation of the failure in order to understand the error and repair the knowledge base. A knowledge structure called a Meta-Explanation Pattern is used to both explain how conclusions are derived and why such conclusions fail. If reasoning is represented in an explicit, declarative manner, the system can examine its own reasoning, analyze its reasoning failures, identify what it needs to learn, and select appropriate learning strategies in order to learn the required knowledge without overreliance on the programmer. I provide the content of the target node and its neighbors' information. The relation between the target node and its 1-hop neighbors is 'citation'. The 7 categories are: 0: Rule_Learning 1: Neural_Networks 2: Case_Based 3: Genetic_Algorithms 4: Theory 5: Reinforcement_Learning 6: Probabilistic_Methods Question: Based on the content of the target and neighbors' scientific publications, predict the category ID (0 to 6) for the target node.
2
Case Based
cora
782
test
1-hop neighbor's text information: Competitive environments evolve better solutions for complex tasks. : 1-hop neighbor's text information: "Evolutionary Module Acquisition," : 1-hop neighbor's text information: Genetic Algorithms in Search, Optimization and Machine Learning. : Angeline, P., Saunders, G. and Pollack, J. (1993) An evolutionary algorithm that constructs recurrent neural networks, LAIR Technical Report #93-PA-GNARLY, Submitted to IEEE Transactions on Neural Networks Special Issue on Evolutionary Programming. Target text information: "Coevolving High Level Representations," : I provide the content of the target node and its neighbors' information. The relation between the target node and its 1-hop neighbors is 'citation'. The 7 categories are: 0: Rule_Learning 1: Neural_Networks 2: Case_Based 3: Genetic_Algorithms 4: Theory 5: Reinforcement_Learning 6: Probabilistic_Methods Question: Based on the content of the target and neighbors' scientific publications, predict the category ID (0 to 6) for the target node.
3
Genetic Algorithms
cora
1,316
test
1-hop neighbor's text information: Learning probabilistic automata with variable memory length. : We propose and analyze a distribution learning algorithm for variable memory length Markov processes. These processes can be described by a subclass of probabilistic finite automata which we name Probabilistic Finite Suffix Automata. The learning algorithm is motivated by real applications in man-machine interaction such as handwriting and speech recognition. Conventionally used fixed memory Markov and hidden Markov models have either severe practical or theoretical drawbacks. Though general hardness results are known for learning distributions generated by sources with similar structure, we prove that our algorithm can indeed efficiently learn distributions generated by our more restricted sources. In particular, we show that the KL-divergence between the distribution generated by the target source and the distribution generated by our hypothesis can be made small with high confidence in polynomial time and sample complexity. We demonstrate the applicability of our algorithm by learning the structure of natural English text and using our hypothesis for the correction of corrupted text. 1-hop neighbor's text information: On the learnability of discrete distributions. : 1-hop neighbor's text information: Cryptographic limitations on learning boolean formulae and finite automata. : In this paper we prove the intractability of learning several classes of Boolean functions in the distribution-free model (also called the Probably Approximately Correct or PAC model) of learning from examples. These results are representation independent, in that they hold regardless of the syntactic form in which the learner chooses to represent its hypotheses. Our methods reduce the problems of cracking a number of well-known public-key cryptosystems to the learning problems. We prove that a polynomial-time learning algorithm for Boolean formulae, deterministic finite automata or constant-depth threshold circuits would have dramatic consequences for cryptography and number theory: in particular, such an algorithm could be used to break the RSA cryptosystem, factor Blum integers (composite numbers equivalent to 3 modulo 4), and detect quadratic residues. The results hold even if the learning algorithm is only required to obtain a slight advantage in prediction over random guessing. The techniques used demonstrate an interesting duality between learning and cryptography. We also apply our results to obtain strong intractability results for approximating a generalization of graph coloring. * This research was conducted while the author was at Harvard University and supported by an A.T.&T. Bell Laboratories scholarship. † Supported by grants ONR-N00014-85-K-0445, NSF-DCR-8606366 and NSF-CCR-89-02500, DAAL03-86-K-0171, DARPA AFOSR 89-0506, and by SERC. Target text information: On the learnability and usage of acyclic probabilistic finite automata. : We propose and analyze a distribution learning algorithm for a subclass of Acyclic Probabilistic Finite Automata (APFA). This subclass is characterized by a certain distinguishability property of the automata's states. Though hardness results are known for learning distributions generated by general APFAs, we prove that our algorithm can indeed efficiently learn distributions generated by the subclass of APFAs we consider. In particular, we show that the KL-divergence between the distribution generated by the target source and the distribution generated by our hypothesis can be made small with high confidence in polynomial time. We present two applications of our algorithm. In the first, we show how to model cursively written letters. The resulting models are part of a complete cursive handwriting recognition system. In the second application we demonstrate how APFAs can be used to build multiple-pronunciation models for spoken words. We evaluate the APFA based pronunciation models on labeled speech data. The good performance (in terms of the log-likelihood obtained on test data) achieved by the APFAs and the incredibly small amount of time needed for learning suggest that the learning algorithm of APFAs might be a powerful alternative to commonly used probabilistic models. I provide the content of the target node and its neighbors' information. The relation between the target node and its 1-hop neighbors is 'citation'. The 7 categories are: 0: Rule_Learning 1: Neural_Networks 2: Case_Based 3: Genetic_Algorithms 4: Theory 5: Reinforcement_Learning 6: Probabilistic_Methods Question: Based on the content of the target and neighbors' scientific publications, predict the category ID (0 to 6) for the target node.
4
Theory
cora
1,947
test
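The evaluation criterion in the target abstract, KL-divergence between the target and hypothesis distributions, can be sketched on strings of fixed length; here a smoothed per-position product model stands in for the learned automaton, purely to show the computation:

```python
import numpy as np
from itertools import product

rng = np.random.default_rng(4)
L = 4
p_bit = np.array([0.9, 0.2, 0.7, 0.5])            # target: independent bits

def target_prob(s):
    return np.prod([p_bit[i] if b else 1 - p_bit[i] for i, b in enumerate(s)])

sample = (rng.random((5_000, L)) < p_bit).astype(int)
est = (sample.sum(axis=0) + 1) / (len(sample) + 2)  # Laplace-smoothed estimates

def hyp_prob(s):
    return np.prod([est[i] if b else 1 - est[i] for i, b in enumerate(s)])

kl = sum(target_prob(s) * np.log(target_prob(s) / hyp_prob(s))
         for s in product((0, 1), repeat=L))
print(f"KL(target || hypothesis) = {kl:.5f} nats")  # shrinks as samples grow
```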
1-hop neighbor's text information: Strongly Typed Genetic Programming. : BBN Technical Report #7866: Abstract Genetic programming is a powerful method for automatically generating computer programs via the process of natural selection [Koza 92]. However, it has the limitation known as "closure", i.e. that all the variables, constants, arguments for functions, and values returned from functions must be of the same data type. To correct this deficiency, we introduce a variation of genetic programming called "strongly typed" genetic programming (STGP). In STGP, variables, constants, arguments, and returned values can be of any data type with the provision that the data type for each such value be specified beforehand. This allows the initialization process and the genetic operators to only generate parse trees such that the arguments of each function in each tree have the required types. An extension to STGP which makes it easier to use is the concept of generic functions, which are not true strongly typed functions but rather templates for classes of such functions. To illustrate STGP, we present three examples involving vector and matrix manipulation: (1) a basis representation problem (which can be constructed to be deceptive by any reasonable definition of "deception"), (2) the n-dimensional least-squares regression problem, and (3) preliminary work on the Kalman filter. 1-hop neighbor's text information: Evolving cooperation strategies. : A key concern in genetic programming (GP) is the size of the state-space which must be searched for large and complex problem domains. One method to reduce the state-space size is by using Strongly Typed Genetic Programming (STGP). We applied both GP and STGP to construct cooperation strategies to be used by multiple predator agents to pursue and capture a prey agent on a grid-world. This domain has been extensively studied in Distributed Artificial Intelligence (DAI) as an easy-to-describe but difficult-to-solve cooperation problem. The evolved programs from our systems are competitive with manually derived greedy algorithms. In particular the STGP paradigm evolved strategies in which the predators were able to achieve their goal without explicitly sensing the location of other predators or communicating with other predators. This is an improvement over previous research in this area. The results of our experiments indicate that STGP is able to evolve programs that perform significantly better than GP evolved programs. In addition, the programs generated by STGP were easier to understand. 1-hop neighbor's text information: Strongly typed genetic programming in evolving cooperation strategies. : Target text information: Evolving behavioral strategies in predators and prey. : The predator/prey domain is utilized to conduct research in Distributed Artificial Intelligence. Genetic Programming is used to evolve behavioral strategies for the predator agents. To further the utility of the predator strategies, the prey population is allowed to evolve at the same time. The expected competitive learning cycle did not surface. This failing is investigated, and a simple prey algorithm surfaces, which is consistently able to evade capture from the predator algorithms. I provide the content of the target node and its neighbors' information. The relation between the target node and its 1-hop neighbors is 'citation'.
The 7 categories are: 0: Rule_Learning 1: Neural_Networks 2: Case_Based 3: Genetic_Algorithms 4: Theory 5: Reinforcement_Learning 6: Probabilistic_Methods Question: Based on the content of the target and neighbors' scientific publications, predict the category ID (0 to 6) for the target node.
3
Genetic Algorithms
cora
1,518
test
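A sketch of the kind of greedy pursuit baseline that evolved predator strategies are typically compared against: each predator steps to reduce toroidal Manhattan distance to the prey. Grid size and positions are arbitrary, and the prey is held fixed here, unlike in the co-evolutionary setting:

```python
GRID = 30

def toroidal_dist(a, b):
    # Manhattan distance on a torus (the grid wraps around)
    return sum(min(abs(ai - bi), GRID - abs(ai - bi)) for ai, bi in zip(a, b))

def step_toward(pos, prey):
    moves = [(0, 1), (0, -1), (1, 0), (-1, 0), (0, 0)]
    candidates = [((pos[0] + dx) % GRID, (pos[1] + dy) % GRID) for dx, dy in moves]
    return min(candidates, key=lambda q: toroidal_dist(q, prey))

predators = [(0, 0), (0, 29), (29, 0), (29, 29)]
prey = (15, 15)
for _ in range(40):
    predators = [step_toward(p, prey) for p in predators]
print(predators)  # each greedy predator reaches the stationary prey's cell
```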
1-hop neighbor's text information: Self-Organization and Associative Memory, : Selective suppression of transmission at feedback synapses during learning is proposed as a mechanism for combining associative feedback with self-organization of feedforward synapses. Experimental data demonstrates cholinergic suppression of synaptic transmission in layer I (feedback synapses), and a lack of suppression in layer IV (feed-forward synapses). A network with this feature uses local rules to learn mappings which are not linearly separable. During learning, sensory stimuli and desired response are simultaneously presented as input. Feedforward connections form self-organized representations of input, while suppressed feedback connections learn the transpose of feedforward connectivity. During recall, suppression is removed, sensory input activates the self-organized representation, and activity generates the learned response. 1-hop neighbor's text information: Introduction to the Theory of Neural Computation. : Neural computation, also called connectionism, parallel distributed processing, neural network modeling or brain-style computation, has grown rapidly in the last decade. Despite this explosion, and ultimately because of impressive applications, there has been a dire need for a concise introduction from a theoretical perspective, analyzing the strengths and weaknesses of connectionist approaches and establishing links to other disciplines, such as statistics or control theory. The Introduction to the Theory of Neural Computation by Hertz, Krogh and Palmer (subsequently referred to as HKP) is written from the perspective of physics, the home discipline of the authors. The book fulfills its mission as an introduction for neural network novices, provided that they have some background in calculus, linear algebra, and statistics. It covers a number of models that are often viewed as disjoint. Critical analyses and fruitful comparisons between these models Target text information: Connectionist Modeling of the Fast Mapping Phenomenon: I provide the content of the target node and its neighbors' information. The relation between the target node and its 1-hop neighbors is 'citation'. The 7 categories are: 0: Rule_Learning 1: Neural_Networks 2: Case_Based 3: Genetic_Algorithms 4: Theory 5: Reinforcement_Learning 6: Probabilistic_Methods Question: Based on the content of the target and neighbors' scientific publications, predict the category ID (0 to 6) for the target node.
1
Neural Networks
cora
649
val
1-hop neighbor's text information: Some remarks on Scheiblechner's treatment of ISOP models. : Scheiblechner (1995) proposes a probabilistic axiomatization of measurement called ISOP (isotonic ordinal probabilistic models) that replaces Rasch's (1980) specific objectivity assumptions with two interesting ordinal assumptions. Special cases of Scheiblechner's model include standard unidimensional factor analysis models in which the loadings are held constant, and the Rasch model for binary item responses. Closely related are the doubly-monotone item response models of Mokken (1971; see also Mokken and Lewis, 1982; Sijtsma, 1988; Molenaar, 1991; Sijtsma and Junker, 1996; and Sijtsma and Hemker, 1996). More generally, strictly unidimensional latent variable models have been considered in some detail by Holland and Rosenbaum (1986), Ellis and van den Wollenberg (1993), and Junker (1990, 1993). The purpose of this note is to provide connections with current research in foundations and nonparametric latent variable and item response modeling that are missing from Scheiblechner's (1995) paper, and to point out important related work by Hemker et al. (1996a,b), Ellis and Junker (1996) and Junker and Ellis (1996). We also discuss counterexamples to three major theorems in the paper. By carrying out these three tasks, we hope to provide researchers interested in the foundations of measurement and item response modeling the opportunity to give the ISOP approach the careful attention it deserves. 1-hop neighbor's text information: Latent and manifest monotonicity in item response models: 1-hop neighbor's text information: A characterization of monotone unidimensional latent variable models. : Target text information: A survey of theory and methods of invariant item ordering. To appear, : This work was initiated while Junker was visiting the University of Utrecht with the support of a Carnegie Mellon University Faculty Development Grant, and the generous hospitality of the Social Sciences Faculty, University of Utrecht. Additional support was provided by the Office of Naval Research, Cognitive Sciences Division, Grant N00014-87-K-0277 and the National Institute of Mental Health, Training Grant MH15758. I provide the content of the target node and its neighbors' information. The relation between the target node and its 1-hop neighbors is 'citation'. The 7 categories are: 0: Rule_Learning 1: Neural_Networks 2: Case_Based 3: Genetic_Algorithms 4: Theory 5: Reinforcement_Learning 6: Probabilistic_Methods Question: Based on the content of the target and neighbors' scientific publications, predict the category ID (0 to 6) for the target node.
1
Neural Networks
cora
2,491
test
1-hop neighbor's text information: Modeling Building-Block Interdependency Dynamical and Evolutionary Machine Organization Group: The Building-Block Hypothesis appeals to the notion of problem decomposition and the assembly of solutions from sub-solutions. Accordingly, there have been many varieties of GA test problems with a structure based on building-blocks. Many of these problems use deceptive fitness functions to model interdependency between the bits within a block. However, very few have any model of interdependency between building-blocks; those that do are not consistent in the type of interaction used intra-block and inter-block. This paper discusses the inadequacies of the various test problems in the literature and clarifies the concept of building-block interdependency. We formulate a principled model of hierarchical interdependency that can be applied through many levels in a consistent manner and introduce Hierarchical If-and-only-if (H-IFF) as a canonical example. We present some empirical results of GAs on H-IFF showing that if population diversity is maintained and linkage is tight then the GA is able to identify and manipulate building-blocks over many levels of assembly, as the Building-Block Hypothesis suggests. 1-hop neighbor's text information: The royal road for genetic algorithms: fitness landscapes and genetic algorithm performance. : Genetic algorithms (GAs) play a major role in many artificial-life systems, but there is often little detailed understanding of why the GA performs as it does, and little theoretical basis on which to characterize the types of fitness landscapes that lead to successful GA performance. In this paper we propose a strategy for addressing these issues. Our strategy consists of defining a set of features of fitness landscapes that are particularly relevant to the GA, and experimentally studying how various configurations of these features affect the GA's performance along a number of dimensions. In this paper we informally describe an initial set of proposed feature classes, describe in detail one such class ("Royal Road" functions), and present some initial experimental results concerning the role of crossover and "building blocks" on landscapes constructed from features of this class. 1-hop neighbor's text information: Genetic Algorithms and Very Fast Reannealing: A Comparison, : We compare Genetic Algorithms (GA) with a functional search method, Very Fast Simulated Reannealing (VFSR), that not only is efficient in its search strategy, but also is statistically guaranteed to find the function optima. GA previously has been demonstrated to be competitive with other standard Boltzmann-type simulated annealing techniques. Presenting a suite of six standard test functions to GA and VFSR codes from previous studies, without any additional fine tuning, strongly suggests that VFSR can be expected to be orders of magnitude more efficient than GA. Target text information: When will a genetic algorithm outperform hill climbing? In Stephanie Forrest, editor, : We analyze a simple hill-climbing algorithm (RMHC) that was previously shown to outperform a genetic algorithm (GA) on a simple "Royal Road" function. We then analyze an "idealized" genetic algorithm (IGA) that is significantly faster than RMHC and that gives a lower bound for GA speed. We identify the features of the IGA that give rise to this speedup, and discuss how these features can be incorporated into a real GA. I provide the content of the target node and its neighbors' information. The relation between the target node and its 1-hop neighbors is 'citation'. The 7 categories are: 0: Rule_Learning 1: Neural_Networks 2: Case_Based 3: Genetic_Algorithms 4: Theory 5: Reinforcement_Learning 6: Probabilistic_Methods Question: Based on the content of the target and neighbors' scientific publications, predict the category ID (0 to 6) for the target node.
3
Genetic Algorithms
cora
2,039
test
1-hop neighbor's text information: "Evolution in Time and Space: The Parallel Genetic Algorithm." In Foundations of Genetic Algorithms, : The parallel genetic algorithm (PGA) uses two major modifications compared to the genetic algorithm. Firstly, selection for mating is distributed. Individuals live in a 2-D world. Selection of a mate is done by each individual independently in its neighborhood. Secondly, each individual may improve its fitness during its lifetime by e.g. local hill-climbing. The PGA is totally asynchronous, running with maximal efficiency on MIMD parallel computers. The search strategy of the PGA is based on a small number of active and intelligent individuals, whereas a GA uses a large population of passive individuals. We will investigate the PGA with deceptive problems and the traveling salesman problem. We outline why and when the PGA is succesful. Abstractly, a PGA is a parallel search with information exchange between the individuals. If we represent the optimization problem as a fitness landscape in a certain configuration space, we see, that a PGA tries to jump from two local minima to a third, still better local minima, by using the crossover operator. This jump is (probabilistically) successful, if the fitness landscape has a certain correlation. We show the correlation for the traveling salesman problem by a configuration space analysis. The PGA explores implicitly the above correlation. 1-hop neighbor's text information: Genetic Algorithms in Search, Optimization and Machine Learning. : Angeline, P., Saunders, G. and Pollack, J. (1993) An evolutionary algorithm that constructs recurrent neural networks, LAIR Technical Report #93-PA-GNARLY, Submitted to IEEE Transactions on Neural Networks Special Issue on Evolutionary Programming. Target text information: Genetic Programming Methodology, Parallelization and Applications par: I provide the content of the target node and its neighbors' information. The relation between the target node and its 1-hop neighbors is 'citation'. The 7 categories are: 0: Rule_Learning 1: Neural_Networks 2: Case_Based 3: Genetic_Algorithms 4: Theory 5: Reinforcement_Learning 6: Probabilistic_Methods Question: Based on the content of the target and neighbors' scientific publications, predict the category ID (0 to 6) for the target node.
3
Genetic Algorithms
cora
544
val
1-hop neighbor's text information: An Efficient Transformation for Implementing Two-Layer FeedForward Neural Networks. : Most Artificial Neural Networks (ANNs) have a fixed topology during learning, and often suffer from a number of shortcomings as a result. Variations of ANNs that use dynamic topologies have shown ability to overcome many of these problems. This paper introduces Location-Independent Transformations (LITs) as a general strategy for implementing distributed feedforward networks that use dynamic topologies (dynamic ANNs) efficiently in parallel hardware. A LIT creates a set of location-independent nodes, where each node computes its part of the network output independent of other nodes, using local information. This type of transformation allows efficient support for adding and deleting nodes dynamically during learning. In particular, this paper presents an LIT for dynamic Backpropagation networks with a single hidden layer. The complexity of both learning and execution algorithms is O(n+p+log m) for a single pattern, where n is the number of inputs, p is the number of outputs, and m is the number of hidden nodes in the original network. Keywords: Neural Networks, Backpropagation, Implementation Design, Dynamic Topologies, Reconfigurable Architectures. 1-hop neighbor's text information: A VLSI Implementation of a Parallel, Self-Organizing Learning Model, : This paper presents a VLSI implementation of the Priority Adaptive Self-Organizing Concurrent System (PASOCS) learning model that is built using a multi-chip module (MCM) substrate. Many current hardware implementations of neural network learning models are direct implementations of classical neural network structures - a large number of simple computing nodes connected by a dense number of weighted links. PASOCS is one of a class of ASOCS (Adaptive Self-Organizing Concurrent System) connectionist models whose overall goal is the same as classical neural network models, but whose functional mechanisms differ significantly. This model has potential application in areas such as pattern recognition, robotics, logical inference, and dynamic control. 1-hop neighbor's text information: A self-organizing binary decision tree for incrementally defined rule based systems. : This paper presents an ASOCS (adaptive self-organizing concurrent system) model for massively parallel processing of incrementally defined rule systems in such areas as adaptive logic, robotics, logical inference, and dynamic control. An ASOCS is an adaptive network composed of many simple computing elements operating asynchronously and in parallel. This paper focuses on adaptive algorithm 3 (AA3) and details its architecture and learning algorithm. It has advantages over previous ASOCS models in simplicity, implementability, and cost. An ASOCS can operate in either a data processing mode or a learning mode. During the data processing mode, an ASOCS acts as a parallel hardware circuit. In learning mode, rules expressed as boolean conjunctions are incrementally presented to the ASOCS. All ASOCS learning algorithms incorporate a new rule in a distributed fashion in a short, bounded time. Target text information: A multi-chip module implementation of a neural network. : The requirement for dense interconnect in artificial neural network systems has led researchers to seek high-density interconnect technologies. This paper reports an implementation using multi-chip modules (MCMs) as the interconnect medium. The specific system described is a self-organizing, parallel, and dynamic learning model which requires a dense interconnect technology for effective implementation; this requirement is fulfilled by exploiting MCM technology. The ideas presented in this paper regarding an MCM implementation of artificial neural networks are versatile and can be adapted to apply to other neural network and connectionist models. I provide the content of the target node and its neighbors' information. The relation between the target node and its 1-hop neighbors is 'citation'. The 7 categories are: 0: Rule_Learning 1: Neural_Networks 2: Case_Based 3: Genetic_Algorithms 4: Theory 5: Reinforcement_Learning 6: Probabilistic_Methods Question: Based on the content of the target and neighbors' scientific publications, predict the category ID (0 to 6) for the target node.
1
Neural Networks
cora
2,603
val
1-hop neighbor's text information: Natural language grammatical inference: A comparison of recurrent neural networks and machine learning methods. : This paper examines the inductive inference of a complex grammar with neural networks specifically, the task considered is that of training a network to classify natural language sentences as grammatical or ungrammatical, thereby exhibiting the same kind of discriminatory power provided by the Principles and Parameters linguistic framework, or Government-and-Binding theory. Neural networks are trained, without the division into learned vs. innate components assumed by Chomsky, in an attempt to produce the same judgments as native speakers on sharply grammatical/ungrammatical data. How a recurrent neural network could possess linguistic capability, and the properties of various common recurrent neural network architectures are discussed. The problem exhibits training behavior which is often not present with smaller grammars, and training was initially difficult. However, after implementing several techniques aimed at improving the convergence of the gradient descent backpropagation-through-time training algorithm, significant learning was possible. It was found that certain architectures are better able to learn an appropriate grammar. The operation of the networks and their training is analyzed. Finally, the extraction of rules in the form of deterministic finite state automata is investigated. Target text information: Simple Synchrony Networks: Learning Generalisations across Syntactic Constituents: This paper describes a training algorithm for Simple Synchrony Networks (SSNs), and reports on experiments in language learning using a recursive grammar. The SSN is a new connectionist architecture combining a technique for learning about patterns across time, Simple Recurrent Networks (SRNs), with Temporal Synchrony Variable Binding (TSVB). The use of TSVB means the SSN can learn about entities in the training set, and generalise this information to entities in the test set. In the experiments, the network is trained on sentences with up to one embedded clause, and with some words restricted to certain classes of constituent. During testing, the network generalises information learned to sentences with up to three embedded clauses, and with words appearing in any constituent. These results demonstrate that SSNs learn generalisations across syntactic constituents. I provide the content of the target node and its neighbors' information. The relation between the target node and its 1-hop neighbors is 'citation'. The 7 categories are: 0: Rule_Learning 1: Neural_Networks 2: Case_Based 3: Genetic_Algorithms 4: Theory 5: Reinforcement_Learning 6: Probabilistic_Methods Question: Based on the content of the target and neighbors' scientific publications, predict the category ID (0 to 6) for the target node.
1
Neural Networks
cora
610
test
1-hop neighbor's text information: A VLIW/SIMD Microprocessor for Artificial Neural Network Computations. : SPERT (Synthetic PERceptron Testbed) is a fully programmable single chip microprocessor designed for efficient execution of artificial neural network algorithms. The first implementation will be in a 1.2 μm CMOS technology with a 50MHz clock rate, and a prototype system is being designed to occupy a double SBus slot within a Sun Sparcstation. SPERT will sustain over 300 × 10^6 connections per second during pattern classification, and around 100 × 10^6 connection updates per second while running the popular error backpropagation training algorithm. This represents a speedup of around two orders of magnitude over a Sparcstation-2 for algorithms of interest. An earlier system produced by our group, the Ring Array Processor (RAP), used commercial DSP chips. Compared with a RAP multiprocessor of similar performance, SPERT represents over an order of magnitude reduction in cost for problems where fixed-point arithmetic is satisfactory. International Computer Science Institute, 1947 Center Street, Berkeley, CA 94704 Target text information: "Object-Oriented Design of Parallel BP Neural Network Simulator and Implementation on the connection machine CM-5", : In this paper we describe the implementation of the backpropagation algorithm by means of an object oriented library (ARCH). The use of this library relieves the user from the details of a specific parallel programming paradigm and at the same time allows a greater portability of the generated code. To provide a comparison with existing solutions, we survey the most relevant implementations of the algorithm proposed so far in the literature, both on dedicated and general purpose computers. Extensive experimental results show that the use of the library does not hurt the performance of our simulator; on the contrary, our implementation on a Connection Machine (CM-5) is comparable with the fastest in its category. I provide the content of the target node and its neighbors' information. The relation between the target node and its 1-hop neighbors is 'citation'. The 7 categories are: 0: Rule_Learning 1: Neural_Networks 2: Case_Based 3: Genetic_Algorithms 4: Theory 5: Reinforcement_Learning 6: Probabilistic_Methods Question: Based on the content of the target and neighbors' scientific publications, predict the category ID (0 to 6) for the target node.
1
Neural Networks
cora
2,474
test
1-hop neighbor's text information: An adaptation of Relief for attribute estimation in regression: Heuristic measures for estimating the quality of attributes mostly assume the independence of attributes, so in domains with strong dependencies between attributes their performance is poor. Relief and its extension ReliefF are capable of correctly estimating the quality of attributes in classification problems with strong dependencies between attributes. By exploiting local information provided by different contexts they provide a global view. We present the analysis of ReliefF which led us to its adaptation to regression (continuous class) problems. The experiments on artificial and real-world data sets show that Regressional ReliefF correctly estimates the quality of attributes in various conditions, and can be used for non-myopic learning of the regression trees. Regressional ReliefF and ReliefF provide a unified view on estimating the attribute quality in regression and classification. 1-hop neighbor's text information: `Non-myopic attribute estimation in regression', : One of the key issues in both discrete and continuous class prediction and in machine learning in general seems to be the problem of estimating the quality of attributes. Heuristic measures mostly assume independence of attributes and therefore cannot be successfully used in domains with strong dependencies between attributes. Relief and its extension ReliefF are statistical methods capable of correctly estimating the quality of attributes in classification problems with strong dependencies between attributes. Following the analysis of ReliefF we have extended it to continuous class problems. Regressional ReliefF (RReliefF) and ReliefF provide a unified view on estimation of quality of attributes. The experiments show that RReliefF successfully estimates the quality of attributes and can be used for non-myopic learning of regression trees. 1-hop neighbor's text information: Learning to Refine Case Libraries: Initial Results Abstract. Conversational case-based reasoning (CBR) systems, which incrementally extract a query description through a user-directed conversation, are advertised for their ease of use. However, designing large case libraries that have good performance (i.e., precision and querying efficiency) is difficult. CBR vendors provide guidelines for designing these libraries manually, but the guidelines are difficult to apply. We describe an automated inductive approach that revises conversational case libraries to increase their conformance with design guidelines. Revision increased performance on three conversational case libraries. Target text information: Context-sensitive feature selection for lazy learners. : I provide the content of the target node and its neighbors' information. The relation between the target node and its 1-hop neighbors is 'citation'. The 7 categories are: 0: Rule_Learning 1: Neural_Networks 2: Case_Based 3: Genetic_Algorithms 4: Theory 5: Reinforcement_Learning 6: Probabilistic_Methods Question: Based on the content of the target and neighbors' scientific publications, predict the category ID (0 to 6) for the target node.
2
Case Based
cora
1,075
test
1-hop neighbor's text information: "On the effect of analog noise on discrete-time analog computations", : We introduce a model for analog computation with discrete time in the presence of analog noise that is flexible enough to cover the most important concrete cases, such as noisy analog neural nets and networks of spiking neurons. This model subsumes the classical model for digital computation in the presence of noise. We show that the presence of arbitrarily small amounts of analog noise reduces the power of analog computational models to that of finite automata, and we also prove a new type of upper bound for the Target text information: Turing computability with neural nets. : This paper shows the existence of a finite neural network, made up of sigmoidal neurons, which simulates a universal Turing machine. It is composed of less than 10 5 synchronously evolving processors, interconnected linearly. High-order connections are not required. I provide the content of the target node and its neighbors' information. The relation between the target node and its 1-hop neighbors is 'citation'. The 7 categories are: 0: Rule_Learning 1: Neural_Networks 2: Case_Based 3: Genetic_Algorithms 4: Theory 5: Reinforcement_Learning 6: Probabilistic_Methods Question: Based on the content of the target and neighbors' scientific publications, predict the category ID (0 to 6) for the target node.
1
Neural Networks
cora
2,398
val
1-hop neighbor's text information: A practical Bayesian framework for backpropagation networks. : A quantitative and practical Bayesian framework is described for learning of mappings in feedforward networks. The framework makes possible: (1) objective comparisons between solutions using alternative network architectures; (2) objective stopping rules for network pruning or growing procedures; (3) objective choice of magnitude and type of weight decay terms or additive regularisers (for penalising large weights, etc.); (4) a measure of the effective number of well-determined parameters in a model; (5) quantified estimates of the error bars on network parameters and on network output; (6) objective comparisons with alternative learning and interpolation models such as splines and radial basis functions. The Bayesian `evidence' automatically embodies `Occam's razor,' penalising over-flexible and over-complex models. The Bayesian approach helps detect poor underlying assumptions in learning models. For learning models well matched to a problem, a good correlation between generalisation ability and the Bayesian evidence is obtained. This paper makes use of the Bayesian framework for regularisation and model comparison described in the companion paper `Bayesian interpolation' (MacKay, 1991a). This framework is due to Gull and Skilling (Gull, 1989a). 1-hop neighbor's text information: Phoneme probability estimation with dynamic sparsely connected artificial neural networks. : This paper presents new methods for training large neural networks for phoneme probability estimation. An architecture combining time-delay windows and recurrent connections is used to capture the important dynamic information of the speech signal. Because the number of connections in a fully connected recurrent network grows super-linearly with the number of hidden units, schemes for sparse connection and connection pruning are explored. It is found that sparsely connected networks outperform their fully connected counterparts with an equal number of connections. The implementation of the combined architecture and training scheme is described in detail. The networks are evaluated in a hybrid HMM/ANN system for phoneme recognition on the TIMIT database, and for word recognition on the WAXHOLM database. The achieved phone error-rate, 27.8%, for the standard 39 phoneme set on the core testset of the TIMIT database is in the range of the lowest reported. All training and simulation software used is made freely available by the author, and detailed information about the software and the training process is given in an Appendix. 1-hop neighbor's text information: A comparison of some error estimates for neural network models. : We discuss a number of methods for estimating the standard error of predicted values from a multi-layer perceptron. These methods include the delta method based on the Hessian, bootstrap estimators, and the "sandwich" estimator. The methods are described and compared in a number of examples. We find that the bootstrap methods perform best, partly because they capture variability due to the choice of starting weights. Target text information: Computing second derivatives in feed-forward networks: a review. : The calculation of second derivatives is required by recent training and analysis techniques of connectionist networks, such as the elimination of superfluous weights, and the estimation of confidence intervals both for weights and network outputs. We here review and develop exact and approximate algorithms for calculating second derivatives. For networks with |w| weights, simply writing the full matrix of second derivatives requires O(|w|^2) operations. For networks of radial basis units or sigmoid units, exact calculation of the necessary intermediate terms requires of the order of 2h + 2 backward/forward-propagation passes where h is the number of hidden units in the network. We also review and compare three approximations (ignoring some components of the second derivative, numerical differentiation, and scoring). Our algorithms apply to arbitrary activation functions, networks, and error functions (for instance, with connections that skip layers, or radial basis functions, or cross-entropy error and Softmax units, etc.). I provide the content of the target node and its neighbors' information. The relation between the target node and its 1-hop neighbors is 'citation'. The 7 categories are: 0: Rule_Learning 1: Neural_Networks 2: Case_Based 3: Genetic_Algorithms 4: Theory 5: Reinforcement_Learning 6: Probabilistic_Methods Question: Based on the content of the target and neighbors' scientific publications, predict the category ID (0 to 6) for the target node.
1
Neural Networks
cora
1,071
test
1-hop neighbor's text information: "Coevolving High Level Representations," : 1-hop neighbor's text information: A Hybrid GP/GA Approach for Co-evolving Controllers and Robot Bodies to Achieve Fitness-Specified Tasks. : Evolutionary approaches have been advocated to automate robot design. Some research work has shown the success of evolving controllers for the robots by genetic approaches. As we can observe, however, not only the controller but also the robot body itself can affect the behavior of the robot in a robot system. In this paper, we develop a hybrid GP/GA approach to evolve both controllers and robot bodies to achieve behavior-specified tasks. In order to assess the performance of the developed approach, it is used to evolve a simulated agent, with its own controller and body, to do obstacle avoidance in the simulated environment. Experimental results show the promise of this work. In addition, the importance of co-evolving controllers and robot bodies is analyzed and discussed in this paper. 1-hop neighbor's text information: Artificial evolution of visual control systems for robots. : Many arthropods (particularly insects) exhibit sophisticated visually guided behaviours. Yet in most cases the behaviours are guided by input from a few hundreds or thousands of "pixels" (i.e. ommatidia in the compound eye). Inspired by this observation, we have for several years been exploring the possibilities of visually guided robots with low-bandwidth vision. Rather than design the robot controllers by hand, we use artificial evolution (in the form of an extended genetic algorithm) to automatically generate the architectures for artificial neural networks which generate effective sensory-motor coordination when controlling mobile robots. Analytic techniques drawn from neuroethology and dynamical systems theory allow us to understand how the evolved robot controllers function, and to predict their behaviour in environments other than those used during the evolutionary process. Initial experiments were performed in simulation, but the techniques have now been successfully transferred to work with a variety of real physical robot platforms. This chapter reviews our past work, concentrating on the analysis of evolved controllers, and gives an overview of our current research. We conclude with a discussion of the application of our evolutionary techniques to problems in biological vision. Target text information: of a simulator for evolving morphology are: Universal the simulator should cover an infinite gen: Funes, P. and Pollack, J. (1997) Computer Evolution of Buildable Objects. Fourth European Conference on Artificial Life. P. Husbands and I. Harvey, eds., MIT Press. pp 358-367. knowledge into the program, which would result in familiar structures, we provided the algorithm with a model of the physical reality and a purely utilitarian fitness function, thus supplying measures of feasibility and functionality. In this way the evolutionary process runs in an environment that has not been unnecessarily constrained. We added, however, a requirement of computability to reject overly complex structures when they took too long for our simulations to evaluate. The results are encouraging. The evolved structures had a surprisingly alien look: they are not based in common knowledge on how to build with brick toys; instead, the computer found ways of its own through the evolutionary search process. 
We were able to assemble the final designs manually and confirm that they accomplish the objectives introduced with our fitness functions. After some background on related problems, we describe our physical simulation model for two-dimensional Lego structures, and the representation for encoding them and applying evolution. We demonstrate the feasibility of our work with photos of actual objects which were the result of particular optimizations. Finally, we discuss future work and draw some conclusions. In order to evolve both the morphology and behavior of autonomous mechanical devices which can be manufactured, one must have a simulator which operates under several constraints, and a resultant controller which is adaptive enough to cover the gap between simulated and real world. eral space of mechanisms. Conservative - because simulation is never perfect, it should preserve a margin of safety. Efficient - it should be quicker to test in simulation than through physical production and test. Buildable - results should be convertible from a simula tion to a real object Computer Evolution of Buildable Objects Abstract The idea of co-evolution of bodies and brains is becoming popular, but little work has been done in evolution of physical structure because of the lack of a general framework for doing it. Evolution of creatures in simulation has been constrained by the reality gap which implies that resultant objects are usually not buildable. The work we present takes a step in the problem of body evolution by applying evolutionary techniques to the design of structures assembled out of parts. Evolution takes place in a simulator we designed, which computes forces and stresses and predicts failure for 2-dimensional Lego structures. The final printout of our program is a schematic assembly, which can then be built physically. We demonstrate its functionality in several different evolved entities. I provide the content of the target node and its neighbors' information. The relation between the target node and its 1-hop neighbors is 'citation'. The 7 categories are: 0: Rule_Learning 1: Neural_Networks 2: Case_Based 3: Genetic_Algorithms 4: Theory 5: Reinforcement_Learning 6: Probabilistic_Methods Question: Based on the content of the target and neighbors' scientific publications, predict the category ID (0 to 6) for the target node.
3
Genetic Algorithms
cora
564
test
1-hop neighbor's text information: Szepesvari and M.L. Littman. A unified analysis of value-function-based reinforcement-learning algorithms. : Reinforcement learning is the problem of generating optimal behavior in a sequential decision-making environment given the opportunity of interacting with it. Many algorithms for solving reinforcement-learning problems work by computing improved estimates of the optimal value function. We extend prior analyses of reinforcement-learning algorithms and present a powerful new theorem that can provide a unified analysis of value-function-based reinforcement-learning algorithms. The usefulness of the theorem lies in how it allows the asynchronous convergence of a complex reinforcement-learning algorithm to be proven by verifying that a simpler synchronous algorithm converges. We illustrate the application of the theorem by analyzing the convergence of Q-learning, model-based reinforcement learning, Q-learning with multi-state updates, Q-learning for Markov games, and risk-sensitive reinforcement learning. 1-hop neighbor's text information: Analytical mean squared error curves in temporal difference learning. : We have calculated analytical expressions for how the bias and variance of the estimators provided by various temporal difference value estimation algorithms change with offline updates over trials in absorbing Markov chains using lookup table representations. We illustrate classes of learning curve behavior in various chains, and show the manner in which TD is sensitive to the choice of its step size and eligibility trace parameters. 1-hop neighbor's text information: Empirical Comparison of Gradient Descent and Exponentiated Gradient Descent in Supervised and Reinforcement Learning Technical Report 96-70 Target text information: Reinforcement learning with replacing eligibility traces. : The eligibility trace is one of the basic mechanisms used in reinforcement learning to handle delayed reward. In this paper we introduce a new kind of eligibility trace, the replacing trace, analyze it theoretically, and show that it results in faster, more reliable learning than the conventional trace. Both kinds of trace assign credit to prior events according to how recently they occurred, but only the conventional trace gives greater credit to repeated events. Our analysis is for conventional and replace-trace versions of the offline TD(1) algorithm applied to undiscounted absorbing Markov chains. First, we show that these methods converge under repeated presentations of the training set to the same predictions as two well known Monte Carlo methods. We then analyze the relative efficiency of the two Monte Carlo methods. We show that the method corresponding to conventional TD is biased, whereas the method corresponding to replace-trace TD is unbiased. In addition, we show that the method corresponding to replacing traces is closely related to the maximum likelihood solution for these tasks, and that its mean squared error is always lower in the long run. Computational results confirm these analyses and show that they are applicable more generally. In particular, we show that replacing traces significantly improve performance and reduce parameter sensitivity on the "Mountain-Car" task, a full reinforcement-learning problem with a continuous state space, when using a feature-based function approximator. I provide the content of the target node and its neighbors' information. The relation between the target node and its 1-hop neighbors is 'citation'. The 7 categories are: 0: Rule_Learning 1: Neural_Networks 2: Case_Based 3: Genetic_Algorithms 4: Theory 5: Reinforcement_Learning 6: Probabilistic_Methods Question: Based on the content of the target and neighbors' scientific publications, predict the category ID (0 to 6) for the target node.
5
Reinforcement Learning
cora
1,535
test
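The record above contrasts replacing and conventional (accumulating) eligibility traces. The difference is small enough to show in code; the following is a minimal tabular TD(lambda) sketch, not taken from the cited paper, in which the `replacing` flag is the only thing that changes between the two trace types. The function name, arguments, and the toy chain at the end are illustrative assumptions.

```python
import numpy as np

def td_lambda_episode(states, rewards, V, alpha=0.1, gamma=1.0, lam=0.9,
                      replacing=True):
    """Run one episode of tabular TD(lambda) over a recorded trajectory.

    `states` has one more entry than `rewards` (it ends in the terminal
    state, whose value should stay 0). `replacing` selects the trace type.
    """
    e = np.zeros_like(V)                       # eligibility traces, one per state
    for t, r in enumerate(rewards):
        s, s_next = states[t], states[t + 1]
        delta = r + gamma * V[s_next] - V[s]   # TD error at this step
        e *= gamma * lam                       # decay every trace
        if replacing:
            e[s] = 1.0                         # replacing trace: reset to 1
        else:
            e[s] += 1.0                        # accumulating trace: add 1
        V += alpha * delta * e                 # credit recently visited states
    return V

# Toy usage: a 3-state absorbing chain 0 -> 1 -> 2 (terminal), reward 1 at the end.
V = td_lambda_episode([0, 1, 2], [0.0, 1.0], np.zeros(3))
```

The two variants only diverge on trajectories that revisit a state, which is exactly the case (repeated events getting extra credit) that the abstract identifies as the source of the conventional trace's bias.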
1-hop neighbor's text information: Smoothing spline models with correlated random errors, : 1-hop neighbor's text information: "Bootstrap Confidence Intervals for Smoothing Splines and Their Comparison to Bayesian Confidence Intervals". : Bayesian confidence intervals of a smoothing spline are often used to distinguish two curves. In this paper, we provide an asymptotic formula for sample size calculations based on Bayesian confidence intervals. Approximations and simulations on special functions indicate that this asymptotic formula is reasonably accurate. Key Words: Bayesian confidence intervals; sample size; smoothing spline. Address: Department of Statistics and Applied Probability, University of California, Santa Barbara, CA 93106-3110. Tel.: (805)893-4870. Fax: (805)893-2334. E-mail: [email protected]. Supported by the National Institute of Health under Grants R01 EY09946, P60 DK20572 and P30 HD18258. Target text information: Behavior near zero of the distribution of GCV smoothing parameter estimates for splines, : I provide the content of the target node and its neighbors' information. The relation between the target node and its 1-hop neighbors is 'citation'. The 7 categories are: 0: Rule_Learning 1: Neural_Networks 2: Case_Based 3: Genetic_Algorithms 4: Theory 5: Reinforcement_Learning 6: Probabilistic_Methods Question: Based on the content of the target and neighbors' scientific publications, predict the category ID (0 to 6) for the target node.
1
Neural Networks
cora
941
val
1-hop neighbor's text information: "On the Markov equivalence of chain graphs, undirected graphs, and acyclic digraphs", : Acyclic digraphs (ADGs) are widely used to describe dependences among variables in multivariate distributions. In particular, the likelihood functions of ADG models admit convenient recursive factorizations that often allow explicit maximum likelihood estimates and that are well suited to building Bayesian networks for expert systems. There may, however, be many ADGs that determine the same dependence (= Markov) model. Thus, the family of all ADGs with a given set of vertices is naturally partitioned into Markov-equivalence classes, each class being associated with a unique statistical model. Statistical procedures, such as model selection or model averaging, that fail to take into account these equivalence classes, may incur substantial computational or other inefficiencies. Recent results have shown that each Markov-equivalence class is uniquely determined by a single chain graph, the essential graph, that is itself Markov-equivalent simultaneously to all ADGs in the equivalence class. Here we propose two stochastic Bayesian model averaging and selection algorithms for essential graphs and apply them to the analysis of three discrete-variable data sets. 1-hop neighbor's text information: (1993) Linear dependencies represented by chain graphs. : 8] Dori, D. and Tarsi, M., "A Simple Algorithm to Construct a Consistent Extension of a Partially Oriented Graph," Computer Science Department, Tel-Aviv University. Also Technical Report R-185, UCLA, Cognitive Systems Laboratory, October 1992. [14] Pearl, J. and Wermuth, N., "When Can Association Graphs Admit a Causal Interpretation?," UCLA, Cognitive Systems Laboratory, Technical Report R-183-L, November 1992. [17] Verma, T.S. and Pearl, J., "Deciding Morality of Graphs is NP-complete," Technical Report R-188, UCLA, Cognitive Systems Laboratory, October 1992. 1-hop neighbor's text information: Graphical Models in Applied Multivariate Statistics. : Target text information: An alternative Markov property for chain graphs. : Graphical Markov models use graphs, either undirected, directed, or mixed, to represent possible dependences among statistical variables. Applications of undirected graphs (UDGs) include models for spatial dependence and image analysis, while acyclic directed graphs (ADGs), which are especially convenient for statistical analysis, arise in such fields as genetics and psychometrics and as models for expert systems and Bayesian belief networks. Lauritzen, Wer-muth, and Frydenberg (LWF) introduced a Markov property for chain graphs, which are mixed graphs that can be used to represent simultaneously both causal and associative dependencies and which include both UDGs and ADGs as special cases. In this paper an alternative Markov property (AMP) for chain graphs is introduced, which in some ways is a more direct extension of the ADG Markov property than is the LWF property for chain graph. I provide the content of the target node and its neighbors' information. The relation between the target node and its 1-hop neighbors is 'citation'. The 7 categories are: 0: Rule_Learning 1: Neural_Networks 2: Case_Based 3: Genetic_Algorithms 4: Theory 5: Reinforcement_Learning 6: Probabilistic_Methods Question: Based on the content of the target and neighbors' scientific publications, predict the category ID (0 to 6) for the target node.
6
Probabilistic Methods
cora
1,114
val
1-hop neighbor's text information: Some studies in machine learning using the game of Checkers. : 1-hop neighbor's text information: "A Coevolutionary Approach to Learning Sequential Decision Rules", : We present a coevolutionary approach to learning sequential decision rules which appears to have a number of advantages over non-coevolutionary approaches. The coevolutionary approach encourages the formation of stable niches representing simpler sub-behaviors. The evolutionary direction of each subbehavior can be controlled independently, providing an alternative to evolving complex behavior using intermediate training steps. Results are presented showing a significant learning rate speedup over a non-coevolutionary approach in a simulated robot domain. In addition, the results suggest the coevolutionary approach may lead to emergent problem decompositions. 1-hop neighbor's text information: ADAPTIVE TESTING OF CONTROLLERS FOR AUTONOMOUS VEHICLES: Autonomous vehicles are likely to require sophisticated software controllers to maintain vehicle performance in the presence of vehicle faults. The test and evaluation of complex software controllers is expected to be a challenging task. The goal of this effort is to apply machine learning techniques from the field of artificial intelligence to the general problem of evaluating an intelligent controller for an autonomous vehicle. The approach involves subjecting a controller to an adaptively chosen set of fault scenarios within a vehicle simulator, and searching for combinations of faults that produce noteworthy performance by the vehicle controller. The search employs a genetic algorithm. We illustrate the approach by evaluating the performance of a subsumption-based controller for an autonomous vehicle. The preliminary evidence suggests that this approach is an effective alternative to manual testing of sophisticated software controllers. Target text information: "Learning sequential decision rules using simulation models and competition," : The problem of learning decision rules for sequential tasks is addressed, focusing on the problem of learning tactical decision rules from a simple flight simulator. The learning method relies on the notion of competition and employs genetic algorithms to search the space of decision policies. Several experiments are presented that address issues arising from differences between the simulation model on which learning occurs and the target environment on which the decision rules are ultimately tested. I provide the content of the target node and its neighbors' information. The relation between the target node and its 1-hop neighbors is 'citation'. The 7 categories are: 0: Rule_Learning 1: Neural_Networks 2: Case_Based 3: Genetic_Algorithms 4: Theory 5: Reinforcement_Learning 6: Probabilistic_Methods Question: Based on the content of the target and neighbors' scientific publications, predict the category ID (0 to 6) for the target node.
3
Genetic Algorithms
cora
949
val
1-hop neighbor's text information: "Adaptive source separation without prewhitening," : Source separation consists in recovering a set of independent signals when only mixtures with unknown coefficients are observed. This paper introduces a class of adaptive algorithms for source separation which implements an adaptive version of equivariant estimation and is henceforth called EASI (Equivariant Adaptive Separation via Independence). The EASI algorithms are based on the idea of serial updating: this specific form of matrix updates systematically yields algorithms with a simple, parallelizable structure, for both real and complex mixtures. Most importantly, the performance of an EASI algorithm does not depend on the mixing matrix. In particular, convergence rates, stability conditions and interference rejection levels depend only on the (normalized) distributions of the source signals. Close form expressions of these quantities are given via an asymptotic performance analysis. This is completed by some numerical experiments illustrating the effectiveness of the proposed approach. 1-hop neighbor's text information: Asymptotic statistical theory of overtraining and cross-validation. : A statistical theory for overtraining is proposed. The analysis treats realizable stochastic neural networks, trained with Kullback-Leibler loss in the asymptotic case. It is shown that the asymptotic gain in the generalization error is small if we perform early stopping, even if we have access to the optimal stopping time. Considering cross-validation stopping we answer the question: In what ratio the examples should be divided into training and testing sets in order to obtain the optimum performance. In the non-asymptotic region cross-validated early stopping always decreases the generalization error. Our large scale simulations done on a CM5 are in nice agreement with our analytical findings. 1-hop neighbor's text information: NIPS*97 The Efficiency and The Robustness of Natural Gradient Descent Learning Rule: We have discovered a new scheme to represent the Fisher information matrix of a stochastic multi-layer perceptron. Based on this scheme, we have designed an algorithm to compute the inverse of the Fisher information matrix. When the input dimension n is much larger than the number of hidden neurons, the complexity of this algorithm is of order O(n 2 ) while the complexity of conventional algorithms for the same purpose is of order O(n 3 ). The inverse of the Fisher information matrix is used in the natural gradient descent algorithm to train single-layer or multi-layer perceptrons. It is confirmed by simulation that the natural gradient Target text information: Natural gradient descent for training multi-layer perceptrons. : The main difficulty in implementing the natural gradient learning rule is to compute the inverse of the Fisher information matrix when the input dimension is large. We have found a new scheme to represent the Fisher information matrix. Based on this scheme, we have designed an algorithm to compute the inverse of the Fisher information matrix. When the input dimension n is much larger than the number of hidden neurons, the complexity of this algorithm is of order O(n 2 ) while the complexity of conventional algorithms for the same purpose is of order O(n 3 ). The simulation has confirmed the efficience and robustness of the natural gradient learning rule. I provide the content of the target node and its neighbors' information. 
The relation between the target node and its 1-hop neighbors is 'citation'. The 7 categories are: 0: Rule_Learning 1: Neural_Networks 2: Case_Based 3: Genetic_Algorithms 4: Theory 5: Reinforcement_Learning 6: Probabilistic_Methods Question: Based on the content of the target and neighbors' scientific publications, predict the category ID (0 to 6) for the target node.
1
Neural Networks
cora
2,574
val
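The record above centers on the cost of inverting the Fisher information matrix. As an aside, a minimal sketch of the plain natural-gradient update that the paper accelerates, written under the assumption that a Fisher estimate is already available, looks like the following; the dense O(n^3) linear solve here is exactly the step the paper's O(n^2) scheme avoids, and this code does not implement that scheme. The function name, damping term, and toy usage are illustrative choices.

```python
import numpy as np

def natural_gradient_step(theta, grad, fisher, lr=0.01, damping=1e-4):
    """One natural-gradient update: theta <- theta - lr * F^{-1} grad.

    `fisher` is an estimate of the Fisher information matrix; the damping
    term keeps the solve well conditioned. The dense solve costs O(n^3),
    which is the cost the paper's specialized inverse avoids for perceptrons.
    """
    n = theta.size
    nat_grad = np.linalg.solve(fisher + damping * np.eye(n), grad)
    return theta - lr * nat_grad

# Toy usage with a random positive semi-definite Fisher estimate.
rng = np.random.default_rng(0)
A = rng.normal(size=(4, 4))
theta = natural_gradient_step(rng.normal(size=4), rng.normal(size=4), A @ A.T)
```

Preconditioning the gradient by the inverse Fisher matrix is what makes the update invariant to reparameterization, which is the property motivating the natural gradient in the first place.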
1-hop neighbor's text information: Environmental Effects on Minimal Behaviors in the Minimat World: The structure of an environment affects the behaviors of the organisms that have evolved in it. How is that structure to be described, and how can its behavioral consequences be explained and predicted? We aim to establish initial answers to these questions by simulating the evolution of very simple organisms in simple environments with different structures. Our artificial creatures, called "minimats," have neither sensors nor memory and behave solely by picking amongst the actions of moving, eating, reproducing, and sitting, according to an inherited probability distribution. Our simulated environments contain only food (and multiple minimats) and are structured in terms of their spatial and temporal food density and the patchiness with which the food appears. Changes in these environmental parameters affect the evolved behaviors of minimats in different ways, and all three parameters are of importance in describing the minimat world. One of the most useful behavioral strategies that evolves is "looping" movement, which allows minimats-despite their lack of internal state-to match their behavior to the temporal (and spatial) structure of their environment. Ultimately we find that minimats construct their own environments through their individual behaviors, making the study of the impact of global environment structure on individual behavior much more complex. 1-hop neighbor's text information: No Gain: Landscapes, Learning Costs and Genetic Assimilation, submitted to EC and University of Sussex, CSRP 409(?) [Merezhkovsky, KS, 1920] in Khakina LN (1992) Concepts of Symbiogenesis: History of Symbiogenesis as an evolutionary mechanism, : The evolution of a population can be guided by phenotypic traits acquired by members of that population during their lifetime. This phenomenon, known as the Baldwin Effect, can speed the evolutionary process as traits that are initially acquired become genetically specified in later generations. This paper presents conditions under which this genetic assimilation can take place. As well as the benefits that lifetime adaptation can give a population, there may be a cost to be paid for that adaptive ability. It is the evolutionary trade-off between these costs and benefits that provides the selection pressure for acquired traits to become genetically specified. It is also noted that genotypic space, in which evolution operates, and phenotypic space, on which adaptive processes (such as learning) operate, are, in general, of a different nature. To guarantee an acquired characteristic can become genetically specified, these spaces must have the property of neighbourhood correlation which means that a small distance between two individuals in phenotypic space implies that there is a small distance between the same two individuals in genotypic space. 1-hop neighbor's text information: Guiding or Hiding: Explorations into the Effects of Learning on the Rate of Evolution.: Individual lifetime learning can `guide' an evolving population to areas of high fitness in genotype space through an evolutionary phenomenon known as the Baldwin effect (Baldwin, 1896; Hinton & Nowlan, 1987). It is the accepted wisdom that this guiding speeds up the rate of evolution. By highlighting another interaction between learning and evolution, that will be termed the Hiding effect, it will be argued here that this depends on the measure of evolutionary speed one adopts. The Hiding effect shows that learning can reduce the selection pressure between individuals by `hiding' their genetic differences. There is thus a trade-off between the Baldwin effect and the Hiding effect to determine learning's influence on evolution and two factors that contribute to this trade-off, the cost of learning and landscape epistasis, are investigated experimentally. Target text information: Evolving sensors in environments of controlled complexity. : Sensors represent a crucial link between the evolutionary forces shaping a species' relationship with its environment, and the individual's cognitive abilities to behave and learn. We report on experiments using a new class of "latent energy environments" (LEE) models to define environments of carefully controlled complexity which allow us to state bounds for random and optimal behaviors that are independent of strategies for achieving the behaviors. Using LEE's analytic basis for defining environments, we then use neural networks (NNets) to model individuals and a steady-state genetic algorithm to model an evolutionary process shaping the NNets, in particular their sensors. Our experiments consider two types of "contact" and "ambient" sensors, and variants where the NNets are not allowed to learn, learn via error correction from internal prediction, and via reinforcement learning. We find that predictive learning, even when using a larger repertoire of the more sophisticated ambient sensors, provides no advantage over NNets unable to learn. However, reinforcement learning using a small number of crude contact sensors does provide a significant advantage. Our analysis of these results points to a tradeoff between the genetic "robustness" of sensors and their informativeness to a learning system. I provide the content of the target node and its neighbors' information. The relation between the target node and its 1-hop neighbors is 'citation'. The 7 categories are: 0: Rule_Learning 1: Neural_Networks 2: Case_Based 3: Genetic_Algorithms 4: Theory 5: Reinforcement_Learning 6: Probabilistic_Methods Question: Based on the content of the target and neighbors' scientific publications, predict the category ID (0 to 6) for the target node.
5
Reinforcement Learning
cora
30
train
1-hop neighbor's text information: "Exploration and model building in mobile robot domains", : I present first results on COLUMBUS, an autonomous mobile robot. COLUMBUS operates in initially unknown, structured environments. Its task is to explore and model the environment efficiently while avoiding collisions with obstacles. COLUMBUS uses an instance-based learning technique for modeling its environment. Real-world experiences are generalized via two artificial neural networks that encode the characteristics of the robot's sensors, as well as the characteristics of typical environments the robot is assumed to face. Once trained, these networks allow for knowledge transfer across different environments the robot will face over its lifetime. COLUMBUS' models represent both the expected reward and the confidence in these expectations. Exploration is achieved by navigating to low confidence regions. An efficient dynamic programming method is employed in background to find minimal-cost paths that, executed by the robot, maximize exploration. COLUMBUS operates in real-time. It has been operating successfully in an office building environment for periods up to hours. 1-hop neighbor's text information: Self-Organization and Associative Memory, : Selective suppression of transmission at feedback synapses during learning is proposed as a mechanism for combining associative feedback with self-organization of feedforward synapses. Experimental data demonstrates cholinergic suppression of synaptic transmission in layer I (feedback synapses), and a lack of suppression in layer IV (feed-forward synapses). A network with this feature uses local rules to learn mappings which are not linearly separable. During learning, sensory stimuli and desired response are simultaneously presented as input. Feedforward connections form self-organized representations of input, while suppressed feedback connections learn the transpose of feedfor-ward connectivity. During recall, suppression is removed, sensory input activates the self-organized representation, and activity generates the learned response. 1-hop neighbor's text information: Representation and evolution of neural networks. : An evolutionary approach for developing improved neural network architectures is presented. It is shown that it is possible to use genetic algorithms for the construction of backpropagation networks for real world tasks. Therefore a network representation is developed with certain properties. Results with various application are presented. Target text information: Learning by error-driven decomposition. : In this paper we describe a new selforganizing decomposition technique for learning high-dimensional mappings. Problem decomposition is performed in an error-driven manner, such that the resulting subtasks (patches) are equally well approximated. Our method combines an unsupervised learning scheme (Feature Maps [Koh84]) with a nonlinear approximator (Backpropagation [RHW86]). The resulting learning system is more stable and effective in changing environments than plain backpropagation and much more powerful than extended feature maps as proposed by [RS88, RMS89]. Extensions of our method give rise to active exploration strategies for autonomous agents facing unknown environments. The appropriateness of our general purpose method will be demonstrated with an ex ample from mathematical function approximation. I provide the content of the target node and its neighbors' information. 
The relation between the target node and its 1-hop neighbors is 'citation'. The 7 categories are: 0: Rule_Learning 1: Neural_Networks 2: Case_Based 3: Genetic_Algorithms 4: Theory 5: Reinforcement_Learning 6: Probabilistic_Methods Question: Based on the content of the target and neighbors' scientific publications, predict the category ID (0 to 6) for the target node.
1
Neural Networks
cora
2575
test
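A toy Python sketch of the error-driven decomposition idea in the target abstract above may help; it is not the paper's feature-map-plus-backpropagation system. Per-patch linear fits stand in for the nonlinear approximators, the 1-D data are invented, and only the error-driven splitting rule (always subdivide the worst-approximated patch) is illustrated.

import numpy as np

rng = np.random.default_rng(0)
x = np.sort(rng.uniform(-3, 3, 400))
y = np.sin(x) + 0.05 * rng.standard_normal(x.size)   # invented target mapping

def patch_error(xs, ys):
    # Fit a line to one patch and return its mean squared residual.
    coef = np.polyfit(xs, ys, 1)
    return float(np.mean((np.polyval(coef, xs) - ys) ** 2))

patches = [(x.min(), x.max())]
for _ in range(15):                          # error-driven splits
    errs = []
    for lo, hi in patches:
        m = (x >= lo) & (x <= hi)
        errs.append(patch_error(x[m], y[m]) if m.sum() > 2 else 0.0)
    worst = int(np.argmax(errs))             # least well-approximated patch
    lo, hi = patches.pop(worst)
    mid = 0.5 * (lo + hi)
    patches += [(lo, mid), (mid, hi)]        # subdivide it
print(len(patches), "patches; worst remaining pre-split error:", round(max(errs), 4))

The splitting rule is what makes the decomposition error-driven: patches multiply where the residual is largest, so the subtasks end up approximated about equally well.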
1-hop neighbor's text information: A portable parallel programming language for artificial neural networks. : CuPit-2 is a programming language specifically designed to express neural network learning algorithms. It provides most of the flexibility of general-purpose languages like C/C++, but results in much clearer and more elegant programs due to higher expressiveness, in particular for algorithms that change the network topology dynamically (constructive algorithms, pruning algorithms). Furthermore, CuPit-2 programs can be compiled into efficient code for parallel machines; no changes are required in the source program. This article presents a description of the language constructs and reports performance results for an implementation of CuPit-2 on symmetric multiprocessors (SMPs). 1-hop neighbor's text information: Proben1: A set of neural network benchmark problems and benchmarking rules. : Proben1 is a collection of problems for neural network learning in the realm of pattern classification and function approximation plus a set of rules and conventions for carrying out benchmark tests with these or similar problems. Proben1 contains 15 data sets from 12 different domains. All datasets represent realistic problems which could be called diagnosis tasks and all but one consist of real world data. The datasets are all presented in the same simple format, using an attribute representation that can directly be used for neural network training. Along with the datasets, Proben1 defines a set of rules for how to conduct and how to document neural network benchmarking. The purpose of the problem and rule collection is to give researchers easy access to data for the evaluation of their algorithms and networks and to make direct comparison of the published results feasible. This report describes the datasets and the benchmarking rules. It also gives some basic performance measures indicating the difficulty of the various problems. These measures can be used as baselines for comparison. 1-hop neighbor's text information: A study of experimental evaluations of neural network learning algorithms: Current research practice. : 113 articles about neural network learning algorithms published in 1993 and 1994 are examined for the amount of experimental evaluation they contain. A third of them do not employ even a single realistic or real learning problem. Only 6% of all articles present results for more than one problem using real world data. Furthermore, a third of all articles present no quantitative comparison with any previously known algorithm. These results indicate that the quality of research in the area of neural network learning algorithms needs improvement. The publication standards should be raised and easily accessible collections of example problems should be built. Target text information: Connection pruning with static and adaptive pruning schedules. : Neural network pruning methods on the level of individual network parameters (e.g. connection weights) can improve generalization, as is shown in this empirical study. However, an open problem in the pruning methods known today (e.g. OBD, OBS, autoprune, epsiprune) is the selection of the number of parameters to be removed in each pruning step (pruning strength). This work presents a pruning method lprune that automatically adapts the pruning strength to the evolution of weights and loss of generalization during training. The method requires no algorithm parameter adjustment by the user.
Results of statistical significance tests comparing autoprune, lprune, and static networks with early stopping are given, based on extensive experimentation with 14 different problems. The results indicate that training with pruning is often significantly better and rarely significantly worse than training with early stopping without pruning. Furthermore, lprune is often superior to autoprune (which is superior to OBD) on diagnosis tasks unless severe pruning early in the training process is required. I provide the content of the target node and its neighbors' information. The relation between the target node and its 1-hop neighbors is 'citation'. The 7 categories are: 0: Rule_Learning 1: Neural_Networks 2: Case_Based 3: Genetic_Algorithms 4: Theory 5: Reinforcement_Learning 6: Probabilistic_Methods Question: Based on the content of the target and neighbors' scientific publications, predict the category ID (0 to 6) for the target node.
1
Neural Networks
cora
2145
test
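A loose Python sketch of an adaptive pruning schedule in the spirit of the target abstract above. The actual lprune criterion couples pruning strength to the evolution of the weights; here, as a stand-in assumption, the strength is simply halved whenever validation loss degrades after a pruning step. The model, data, and constants are invented.

import numpy as np

rng = np.random.default_rng(1)
X = rng.standard_normal((200, 30))
w_true = np.zeros(30); w_true[:5] = 2.0            # only 5 relevant inputs
y = X @ w_true + 0.1 * rng.standard_normal(200)
Xtr, ytr, Xva, yva = X[:150], y[:150], X[150:], y[150:]

w = np.zeros(30); mask = np.ones(30, dtype=bool)
strength = 4                                  # weights removed per pruning step
for step in range(40):
    for _ in range(25):                       # training phase (plain LMS)
        g = Xtr.T @ (Xtr @ (w * mask) - ytr) / len(ytr)
        w -= 0.05 * g * mask
    val_before = np.mean((Xva @ (w * mask) - yva) ** 2)
    alive = np.flatnonzero(mask)
    if alive.size <= strength:
        break
    drop = alive[np.argsort(np.abs(w[alive]))[:strength]]
    mask[drop] = False                        # prune smallest surviving weights
    val_after = np.mean((Xva @ (w * mask) - yva) ** 2)
    if val_after > 1.1 * val_before:          # generalization got worse:
        mask[drop] = True                     # undo and soften the schedule
        strength = max(1, strength // 2)

print("surviving weights:", np.flatnonzero(mask))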
1-hop neighbor's text information: Schultz (1994). "An evolutionary approach to learning in robots," Machine Learning Workshop on Robot Learning, : Evolutionary learning methods have been found to be useful in several areas in the development of intelligent robots. In the approach described here, evolutionary algorithms are used to explore alternative robot behaviors within a simulation model as a way of reducing the overall knowledge engineering effort. This paper presents some initial results of applying the SAMUEL genetic learning system to a collision avoidance and navigation task for mobile robots. 1-hop neighbor's text information: EVOLVING ROBOT BEHAVIORS: This paper discusses the use of evolutionary computation to evolve behaviors that exhibit emergent intelligent behavior. Genetic algorithms are used to learn navigation and collision avoidance behaviors for robots. The learning is performed under simulation, and the resulting behaviors are then used to control the actual robot. Some of the emergent behavior is described in detail. 1-hop neighbor's text information: "Genetic and Non-Genetic Operators in Alecsys," : It is well known that standard learning classifier systems, when applied to many different domains, exhibit a number of problems: payoff oscillation, a difficult-to-regulate interplay between the reward system and the background genetic algorithm (GA), rule-chain instability, and default-hierarchy instability, to name only a few. ALECSYS is a parallel version of a standard learning classifier system (CS), and as such suffers from these same problems. In this paper we propose some innovative solutions to some of these problems. We introduce the following original features. Mutespec, a new genetic operator used to specialize potentially useful classifiers. Energy, a quantity introduced to measure global convergence in order to apply the genetic algorithm only when the system is close to a steady state. Dynamical adjustment of the classifier set cardinality, in order to speed up the performance phase of the algorithm. We present simulation results of experiments run in a simulated two-dimensional world in which a simple agent learns to follow a light source. Target text information: Genetic-based machine learning and behavior based robotics: a new synthesis. : difficult. We face this problem using an architecture based on learning classifier systems. After a description of the learning technique used and of the organizational structure proposed, we present experiments that show how behaviour acquisition can be achieved. Our simulated robot learns behaviours that conform to structural properties of animal behavioural organization, as proposed by ethologists. I provide the content of the target node and its neighbors' information. The relation between the target node and its 1-hop neighbors is 'citation'. The 7 categories are: 0: Rule_Learning 1: Neural_Networks 2: Case_Based 3: Genetic_Algorithms 4: Theory 5: Reinforcement_Learning 6: Probabilistic_Methods Question: Based on the content of the target and neighbors' scientific publications, predict the category ID (0 to 6) for the target node.
5
Reinforcement Learning
cora
1517
test
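For readers new to the genetic machinery these robot-learning records build on, a minimal Python GA loop follows. The fitness function is a trivial bit-matching stand-in; the cited systems would plug a behaviour simulation (collision avoidance, light following) in its place, and the target bit pattern below is entirely hypothetical.

import random

random.seed(2)
TARGET = [1, 0, 1, 1, 0, 0, 1, 0, 1, 1, 1, 0, 0, 1, 0, 1]  # hypothetical "good rule set"

def fitness(ind):                 # stand-in for evaluating behaviour in simulation
    return sum(a == b for a, b in zip(ind, TARGET))

def tournament(pop):              # size-3 tournament selection
    return max(random.sample(pop, 3), key=fitness)

pop = [[random.randint(0, 1) for _ in TARGET] for _ in range(40)]
for gen in range(60):
    nxt = []
    while len(nxt) < len(pop):
        a, b = tournament(pop), tournament(pop)
        cut = random.randrange(1, len(TARGET))                     # one-point crossover
        child = a[:cut] + b[cut:]
        child = [bit ^ (random.random() < 0.02) for bit in child]  # bit-flip mutation
        nxt.append(child)
    pop = nxt
print("best fitness:", max(map(fitness, pop)), "of", len(TARGET))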
1-hop neighbor's text information: Extensions of Fill's algorithm for perfect simulation. : Fill's algorithm for perfect simulation for attractive finite state space models, unbiased for user impatience, is presented in terms of stochastic recursive sequences and extended in two ways. Repulsive discrete Markov random fields with two coding sets, like the auto-Poisson distribution on a lattice with 4-neighbourhood, can be treated as monotone systems if a particular partial ordering and quasi-maximal and quasi-minimal states are used. Fill's algorithm then applies directly. Combining Fill's rejection sampling with sandwiching leads to a version of the algorithm which works for general discrete conditionally specified repulsive models. Extensions to other types of models are briefly discussed. Target text information: Exact simulation using Markov chains. : This report gives a review of the new exact simulation algorithms using Markov chains. The first part covers the discrete case. We consider two different algorithms: Propp and Wilson's coupling from the past (CFTP) technique and Fill's rejection sampler. The algorithms are tested on the Ising model, with and without an external field. The second part covers continuous state spaces. We present several algorithms developed by Murdoch and Green, all based on coupling from the past. We discuss the applicability of these methods to a Bayesian analysis problem of surgical failure rates. I provide the content of the target node and its neighbors' information. The relation between the target node and its 1-hop neighbors is 'citation'. The 7 categories are: 0: Rule_Learning 1: Neural_Networks 2: Case_Based 3: Genetic_Algorithms 4: Theory 5: Reinforcement_Learning 6: Probabilistic_Methods Question: Based on the content of the target and neighbors' scientific publications, predict the category ID (0 to 6) for the target node.
6
Probabilistic Methods
cora
2275
test
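Since the record above turns on coupling from the past, here is a compact Python CFTP sketch for a monotone chain on {0,...,10} (a reflecting random walk, chosen for brevity; the cited papers treat Ising and Markov random field models). Extreme top and bottom trajectories share randomness, and the start time doubles backwards until they coalesce, at which point the common value is an exact draw from the stationary distribution.

import random

N = 10
def update(state, u):
    # Monotone update rule driven by a shared uniform random number u.
    if u < 0.4:
        return max(state - 1, 0)
    if u < 0.8:
        return min(state + 1, N)
    return state

def cftp(seed=3):
    rng = random.Random(seed)
    noise = []                      # noise[t] drives time step -(t+1); fixed once drawn
    T = 1
    while True:
        while len(noise) < T:
            noise.append(rng.random())
        lo, hi = 0, N               # extreme states at time -T
        for t in range(T - 1, -1, -1):
            lo, hi = update(lo, noise[t]), update(hi, noise[t])
        if lo == hi:                # coalesced: exact stationary sample
            return lo
        T *= 2                      # otherwise restart further in the past

print([cftp(seed=s) for s in range(10)])

The essential invariant is that the random number attached to each past time step is drawn once and reused verbatim on every restart; only the starting time moves further into the past.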
1-hop neighbor's text information: Inferential Theory of Learning: Developing Foundations for Multistrategy Learning, in Machine Learning: A Multistrategy Approach, Vol. IV, R.S. : The development of multistrategy learning systems should be based on a clear understanding of the roles and the applicability conditions of different learning strategies. To this end, this chapter introduces the Inferential Theory of Learning that provides a conceptual framework for explaining logical capabilities of learning strategies, i.e., their competence. Viewing learning as a process of modifying the learner's knowledge by exploring the learner's experience, the theory postulates that any such process can be described as a search in a knowledge space, triggered by the learner's experience and guided by learning goals. The search operators are instantiations of knowledge transmutations, which are generic patterns of knowledge change. Transmutations may employ any basic type of inference: deduction, induction or analogy. Several fundamental knowledge transmutations are described in a novel and general way, such as generalization, abstraction, explanation and similization, and their counterparts, specialization, concretion, prediction and dissimilization, respectively. Generalization enlarges the reference set of a description (the set of entities that are being described). Abstraction reduces the amount of the detail about the reference set. Explanation generates premises that explain (or imply) the given properties of the reference set. Similization transfers knowledge from one reference set to a similar reference set. Using concepts of the theory, a multistrategy task-adaptive learning (MTL) methodology is outlined, and illustrated by an example. MTL dynamically adapts strategies to the learning task, defined by the input information, the learner's background knowledge, and the learning goal. It aims at synergistically integrating a whole range of inferential learning strategies, such as empirical generalization, constructive induction, deductive generalization, explanation, prediction, abstraction, and similization. 1-hop neighbor's text information: Using an n²-classifier in constructive induction: In this paper, we propose a multi-classification approach for constructive induction. The idea of an improvement of classification accuracy is based on iterative modification of the input data space. This process is independently repeated for each pair of n classes. Finally, it gives (n² - n)/2 input data subspaces of attributes dedicated to optimal discrimination of the appropriate pairs of classes. We use genetic algorithms as a constructive induction engine. A final classification is obtained by a weighted majority voting rule, according to the n²-classifier approach. The computational experiment was performed on a medical data set. The obtained results point to the advantage of using a multi-classification model (n²-classifier) in constructive induction relative to the analogous single-classifier approach. 1-hop neighbor's text information: Learning to integrate multiple knowledge sources for case-based reasoning. : The case-based reasoning process depends on multiple overlapping knowledge sources, each of which provides an opportunity for learning. Exploiting these opportunities requires not only determining the learning mechanisms to use for each individual knowledge source, but also how the different learning mechanisms interact and their combined utility.
This paper presents a case study examining the relative contributions and costs involved in learning processes for three different knowledge sources (cases, case adaptation knowledge, and similarity information) in a case-based planner. It demonstrates the importance of interactions between different learning processes and identifies a promising method for integrating multiple learning methods to improve case-based reasoning. Target text information: Machine Learning: A Multistrategy Approach, : Machine learning techniques are perceived to have a great potential as means for the acquisition of knowledge; nevertheless, their use in complex engineering domains is still rare. Most machine learning techniques have been studied in the context of knowledge acquisition for well-defined tasks, such as classification. Learning for these tasks can be handled by relatively simple algorithms. Complex domains present difficulties that can be approached by combining the strengths of several complementary learning techniques, and overcoming their weaknesses by providing alternative learning strategies. This study presents two perspectives, the macro and the micro, for viewing the issue of multistrategy learning. The macro perspective deals with the decomposition of an overall complex learning task into relatively well-defined learning tasks, and the micro perspective deals with designing multistrategy learning techniques for supporting the acquisition of knowledge for each task. The two perspectives are discussed in the context of I provide the content of the target node and its neighbors' information.
2
Case Based
cora
1266
test
1-hop neighbor's text information: Bumptrees for Efficient Function, Constraint, and Classification Learning, : A new class of data structures called bumptrees is described. These structures are useful for efficiently implementing a number of neural network related operations. An empirical comparison with radial basis functions is presented on a robot arm mapping learning task. Applications to density estimation, classification, and constraint representation and learning are also outlined. 1-hop neighbor's text information: An empirical comparison of selection measures for decision-tree induction. : [Ourston and Mooney, 1990b] D. Ourston and R. J. Mooney. Improving shared rules in multiple category domain theories. Technical Report AI90-150, Artificial Intelligence Laboratory, University of Texas, Austin, TX, December 1990. Target text information: Fast Bounded Smooth Regression with Lazy Neural Trees: We propose the lazy neural tree (LNT) as the appropriate architecture for the realization of smooth regression systems. The LNT is a hybrid of a decision tree and a neural network. From the neural network it inherits smoothness of the generated function, incremental adaptability, and conceptual simplicity. From the decision tree it inherits the topology and initial parameter setting as well as a very efficient sequential implementation that outperforms traditional neural network simulations by orders of magnitude. The enormous speed is achieved by lazy evaluation. A further speed-up can be obtained by the application of a windowing scheme if the region of interesting results is restricted. I provide the content of the target node and its neighbors' information. The relation between the target node and its 1-hop neighbors is 'citation'. The 7 categories are: 0: Rule_Learning 1: Neural_Networks 2: Case_Based 3: Genetic_Algorithms 4: Theory 5: Reinforcement_Learning 6: Probabilistic_Methods Question: Based on the content of the target and neighbors' scientific publications, predict the category ID (0 to 6) for the target node.
1
Neural Networks
cora
678
test
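A rough Python illustration of the lazy-evaluation idea behind the LNT record above, with invented structure and coefficients: a tree whose inner nodes hold sigmoid gates and whose leaves hold linear models. A smooth evaluation blends every leaf; the lazy variant follows only the dominant branch, which is where the claimed speed comes from. The real LNT derives its topology and parameters from a trained decision tree, whereas this sketch hand-builds a single node.

import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

# Hand-built tree: ("node", threshold, gain, left, right) or ("leaf", a, b).
tree = ("node", 0.0, 4.0,
        ("leaf", -1.0, -0.5),     # left model:  y = -1.0*x - 0.5
        ("leaf", 1.0, 0.5))       # right model: y =  1.0*x + 0.5

def eval_smooth(t, x):            # blends every leaf through sigmoid gates
    if t[0] == "leaf":
        _, a, b = t
        return a * x + b
    _, thr, gain, left, right = t
    g = sigmoid(gain * (x - thr))
    return (1 - g) * eval_smooth(left, x) + g * eval_smooth(right, x)

def eval_lazy(t, x):              # follows the dominant branch only
    while t[0] == "node":
        _, thr, gain, left, right = t
        t = right if x >= thr else left
    _, a, b = t
    return a * x + b

for x in (-2.0, -0.1, 0.1, 2.0):
    print(x, round(eval_smooth(tree, x), 3), round(eval_lazy(tree, x), 3))

Away from a gate threshold the two evaluations agree closely; near a threshold the lazy path trades a little smoothness for visiting one leaf instead of all of them.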
1-hop neighbor's text information: BLIND SEPARATION OF REAL WORLD AUDIO SIGNALS USING OVERDETERMINED MIXTURES: We discuss the advantages of using overdetermined mixtures to improve upon blind source separation algorithms that are designed to extract sound sources from acoustic mixtures. A study of the nature of room impulse responses helps us choose an adaptive filter architecture. We use ideal inverses of acquired room impulse responses to compare the effectiveness of different-sized separating filter configurations of various filter lengths. Using a multi-channel blind least-mean-square algorithm (MBLMS), we show that, by adding additional sensors, we can improve upon the separation of signals mixed with real world filters. 1-hop neighbor's text information: Blind separation of delayed sources based on information maximisation, : Blind separation of independent sources from their convolutive mixtures is a problem in many real world multi-sensor applications. In this paper we present a solution to this problem based on the information maximization principle, which was recently proposed by Bell and Sejnowski for the case of blind separation of instantaneous mixtures. We present a feedback network architecture capable of coping with convolutive mixtures, and we derive the adaptation equations for the adaptive filters in the network by maximizing the information transferred through the network. Examples using speech signals are presented to illustrate the algorithm. 1-hop neighbor's text information: An Information Maximization Approach to Blind Separation and Blind Deconvolution. : We derive a new self-organising learning algorithm which maximises the information transferred in a network of non-linear units. The algorithm does not assume any knowledge of the input distributions, and is defined here for the zero-noise limit. Under these conditions, information maximisation has extra properties not found in the linear case (Linsker 1989). The non-linearities in the transfer function are able to pick up higher-order moments of the input distributions and perform something akin to true redundancy reduction between units in the output representation. This enables the network to separate statistically independent components in the inputs: a higher-order generalisation of Principal Components Analysis. We apply the network to the source separation (or cocktail party) problem, successfully separating unknown mixtures of up to ten speakers. We also show that a variant on the network architecture is able to perform blind deconvolution (cancellation of unknown echoes and reverberation in a speech signal). Finally, we derive dependencies of information transfer on time delays. We suggest that information maximisation provides a unifying framework for problems in 'blind' signal processing. * Please send comments to [email protected]. This paper will appear as Neural Computation, 7, 6, 1004-1034 (1995). The reference for this version is: Technical Report no. INC-9501, February 1995, Institute for Neural Computation, UCSD, San Diego, CA 92093-0523. Target text information: Blind separation of delayed and convolved sources. : We address the difficult problem of separating multiple speakers with multiple microphones in a real room. We combine the work of Torkkola and Amari, Cichocki and Yang, to give Natural Gradient information maximisation rules for recurrent (IIR) networks, blindly adjusting delays, separating and deconvolving mixed signals.
While they work well on simulated data, these rules fail in real rooms, which usually involve non-minimum-phase transfer functions that are not invertible using stable IIR filters. An approach that sidesteps this problem is to perform infomax on a feedforward architecture in the frequency domain (Lambert 1996). We demonstrate real-room separation of two natural signals using this approach. I provide the content of the target node and its neighbors' information. The relation between the target node and its 1-hop neighbors is 'citation'. The 7 categories are: 0: Rule_Learning 1: Neural_Networks 2: Case_Based 3: Genetic_Algorithms 4: Theory 5: Reinforcement_Learning 6: Probabilistic_Methods Question: Based on the content of the target and neighbors' scientific publications, predict the category ID (0 to 6) for the target node.
1
Neural Networks
cora
2494
test
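The separation rule underlying all three neighbours above is Bell and Sejnowski's infomax, and in the instantaneous-mixture case it reduces to a few lines. The Python sketch below uses the natural-gradient form ΔW ∝ (I + (1 - 2y)uᵀ)W with a logistic nonlinearity; the delayed/convolved papers extend this same rule with adaptive filters. The sources and mixing matrix are toy choices, not from any of the cited experiments.

import numpy as np

rng = np.random.default_rng(4)
n = 20000
s = np.vstack([np.sign(rng.standard_normal(n)) * rng.exponential(1.0, n),
               rng.laplace(0.0, 1.0, n)])        # two super-Gaussian sources
A = np.array([[1.0, 0.6], [0.4, 1.0]])           # unknown mixing matrix
x = A @ s

W = np.eye(2)
lr, batch = 0.01, 100
for i in range(0, n - batch, batch):
    u = W @ x[:, i:i + batch]
    y = 1.0 / (1.0 + np.exp(-u))                 # logistic nonlinearity
    # Natural-gradient infomax: dW = lr * (I + (1 - 2y) u^T / batch) W
    W += lr * (np.eye(2) + (1 - 2 * y) @ u.T / batch) @ W

print(np.round(W @ A, 2))   # ~ scaled/permuted identity if separation worked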
1-hop neighbor's text information: Learning to integrate multiple knowledge sources for case-based reasoning. : The case-based reasoning process depends on multiple overlapping knowledge sources, each of which provides an opportunity for learning. Exploiting these opportunities requires not only determining the learning mechanisms to use for each individual knowledge source, but also how the different learning mechanisms interact and their combined utility. This paper presents a case study examining the relative contributions and costs involved in learning processes for three different knowledge sources (cases, case adaptation knowledge, and similarity information) in a case-based planner. It demonstrates the importance of interactions between different learning processes and identifies a promising method for integrating multiple learning methods to improve case-based reasoning. 1-hop neighbor's text information: Acquiring case adaptation knowledge: A hybrid approach. : The ability of case-based reasoning (CBR) systems to apply cases to novel situations depends on their case adaptation knowledge. However, endowing CBR systems with adequate adaptation knowledge has proven to be a very difficult task. This paper describes a hybrid method for performing case adaptation, using a combination of rule-based and case-based reasoning. It shows how this approach provides a framework for acquiring flexible adaptation knowledge from experiences with autonomous adaptation and suggests its potential as a basis for acquisition of adaptation knowledge from interactive user guidance. It also presents initial experimental results examining the benefits of the approach and comparing the relative contributions of case learning and adaptation learning to reasoning performance. 1-hop neighbor's text information: Constructive similarity assessment: Using stored cases to define new situations. : A fundamental issue in case-based reasoning is similarity assessment: determining similarities and differences between new and retrieved cases. Many methods have been developed for comparing input case descriptions to the cases already in memory. However, the success of such methods depends on the input case description being sufficiently complete to reflect the important features of the new situation, which is not assured. In case-based explanation of anomalous events during story understanding, the anomaly arises because the current situation is incompletely understood; consequently, similarity assessment based on matches between known current features and old cases is likely to fail because of gaps in the current case's description. Our solution to the problem of gaps in a new case's description is an approach that we call constructive similarity assessment. Constructive similarity assessment treats similarity assessment not as a simple comparison between fixed new and old cases, but as a process for deciding which types of features should be investigated in the new situation and, if the features are borne out by other knowledge, added to the description of the current case. Constructive similarity assessment does not merely compare new cases to old: using prior cases as its guide, it dynamically carves augmented descriptions of new cases out of memory. Target text information: Case-based similarity assessment: Estimating adaptability from experience. : Case-based problem-solving systems rely on similarity assessment to select stored cases whose solutions are easily adaptable to fit current problems.
However, widely-used similarity assessment strategies, such as evaluation of semantic similarity, can be poor predictors of adaptability. As a result, systems may select cases that are difficult or impossible for them to adapt, even when easily adaptable cases are available in memory. This paper presents a new similarity assessment approach which couples similarity judgments directly to a case library containing the system's adaptation knowledge. It examines this approach in the context of a case-based planning system that learns both new plans and new adaptations. Empirical tests of alternative similarity assessment strategies show that this approach enables better case selection and increases the benefits accrued from learned adaptations. I provide the content of the target node and its neighbors' information. The relation between the target node and its 1-hop neighbors is 'citation'. The 7 categories are: 0: Rule_Learning 1: Neural_Networks 2: Case_Based 3: Genetic_Algorithms 4: Theory 5: Reinforcement_Learning 6: Probabilistic_Methods Question: Based on the content of the target and neighbors' scientific publications, predict the category ID (0 to 6) for the target node.
2
Case Based
cora
67
test
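A hypothetical sketch of adaptability-guided retrieval as described in the target abstract above: candidate cases are scored by semantic similarity minus an adaptation-cost estimate looked up from learned adaptation knowledge. Every feature name and cost below is invented for illustration; the paper's system derives such estimates from its own case library of learned adaptations.

# Learned adaptation knowledge: estimated cost of repairing one mismatch.
adapt_cost = {"cheap": 0.1, "solo": 0.9, "outdoor": 0.5, "group": 0.2,
              "cuisine": 0.3}

cases = [
    {"id": "plan-A", "features": {"cuisine", "cheap", "group"}},
    {"id": "plan-B", "features": {"cuisine", "solo"}},
    {"id": "plan-C", "features": {"outdoor", "group"}},
]

def score(query, case):
    shared = query & case["features"]
    mismatched = query ^ case["features"]            # features needing adaptation
    sim = len(shared) / len(query | case["features"])      # Jaccard similarity
    cost = sum(adapt_cost.get(f, 1.0) for f in mismatched) # adaptability estimate
    return sim - cost

query = {"cuisine", "cheap", "solo"}
for c in sorted(cases, key=lambda c: -score(query, c)):
    print(c["id"], round(score(query, c), 2))

The point the sketch makes is the paper's central one: the case that looks most similar semantically need not be the one ranked first once estimated adaptation cost is folded into the score.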
1-hop neighbor's text information: Warmuth "How to use expert advice", : We analyze algorithms that predict a binary value by combining the predictions of several prediction strategies, called experts. Our analysis is for worst-case situations, i.e., we make no assumptions about the way the sequence of bits to be predicted is generated. We measure the performance of the algorithm by the difference between the expected number of mistakes it makes on the bit sequence and the expected number of mistakes made by the best expert on this sequence, where the expectation is taken with respect to the randomization in the predictions. We show that the minimum achievable difference is on the order of the square root of the number of mistakes of the best expert, and we give efficient algorithms that achieve this. Our upper and lower bounds have matching leading constants in most cases. We then show how this leads to certain kinds of pattern recognition/learning algorithms with performance bounds that improve on the best results currently known in this context. We also compare our analysis to the case in which log loss is used instead of the expected number of mistakes. 1-hop neighbor's text information: On-line learning of linear functions. : We present an algorithm for the on-line learning of linear functions which is optimal to within a constant factor with respect to bounds on the sum of squared errors for a worst case sequence of trials. The bounds are logarithmic in the number of variables. Furthermore, the algorithm is shown to be optimally robust with respect to noise in the data (again to within a constant factor). Key words. Machine learning; computational learning theory; on-line learning; linear functions; worst-case loss bounds; adaptive filter theory. Subject classifications. 68T05. 1-hop neighbor's text information: Long. The learning complexity of smooth functions of a single variable. : We study the on-line learning of classes of functions of a single real variable formed through bounds on various norms of functions' derivatives. We determine the best bounds obtainable on the worst-case sum of squared errors (also "absolute" errors) for several such classes. We prove upper bounds for these classes of smooth functions for other loss functions, and prove upper and lower bounds in terms of the number of trials. Target text information: Worst-case quadratic loss bounds for on-line prediction of linear functions by gradient descent. : In this paper we study the performance of gradient descent when applied to the problem of on-line linear prediction in arbitrary inner product spaces. We show worst-case bounds on the sum of the squared prediction errors under various assumptions concerning the amount of a priori information about the sequence to predict. The algorithms we use are variants and extensions of on-line gradient descent. Whereas our algorithms always predict using linear functions as hypotheses, none of our results requires the data to be linearly related. In fact, the bounds proved on the total prediction loss are typically expressed as a function of the total loss of the best fixed linear predictor with bounded norm. All the upper bounds are tight to within constants. Matching lower bounds are provided in some cases. Finally, we apply our results to the problem of on-line prediction for classes of smooth functions. I provide the content of the target node and its neighbors' information. The relation between the target node and its 1-hop neighbors is 'citation'. 
The 7 categories are: 0: Rule_Learning 1: Neural_Networks 2: Case_Based 3: Genetic_Algorithms 4: Theory 5: Reinforcement_Learning 6: Probabilistic_Methods Question: Based on the content of the target and neighbors' scientific publications, predict the category ID (0 to 6) for the target node.
4
Theory
cora
1106
test
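The algorithm these worst-case quadratic-loss bounds analyse is plain online gradient descent for linear prediction, shown below in Python. Note the worst-case flavour: nothing in the loop assumes the data are linearly related; the bounds compare the learner's cumulative loss to that of the best fixed linear predictor. The data generator here is only a stand-in so the script runs.

import numpy as np

rng = np.random.default_rng(5)
d, T, eta = 5, 2000, 0.01
u = rng.standard_normal(d)                     # comparator (unknown to the learner)
w = np.zeros(d)
total_loss, comp_loss = 0.0, 0.0
for t in range(T):
    x = rng.standard_normal(d)
    y = u @ x + 0.1 * rng.standard_normal()    # an arbitrary-looking sequence
    yhat = w @ x                               # predict
    total_loss += (yhat - y) ** 2              # suffer squared loss
    comp_loss += (u @ x - y) ** 2
    w -= eta * 2 * (yhat - y) * x              # gradient step on the squared loss

print(round(total_loss, 1), "vs best fixed predictor:", round(comp_loss, 1))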
1-hop neighbor's text information: Self-Organization and Associative Memory, : Selective suppression of transmission at feedback synapses during learning is proposed as a mechanism for combining associative feedback with self-organization of feedforward synapses. Experimental data demonstrates cholinergic suppression of synaptic transmission in layer I (feedback synapses), and a lack of suppression in layer IV (feedforward synapses). A network with this feature uses local rules to learn mappings which are not linearly separable. During learning, sensory stimuli and desired response are simultaneously presented as input. Feedforward connections form self-organized representations of input, while suppressed feedback connections learn the transpose of feedforward connectivity. During recall, suppression is removed, sensory input activates the self-organized representation, and activity generates the learned response. 1-hop neighbor's text information: Software for ANN training on a ring array processor. : The design and implementation of software for the Ring Array Processor (RAP), a high performance parallel computer, involved development for three hardware platforms: Sun SPARC workstations, Heurikon MC68020 boards running the VxWorks real-time operating system, and Texas Instruments TMS320C30 DSPs. The RAP now runs in Sun workstations under UNIX and in a VME based system using VxWorks. A flexible set of tools has been provided both to the RAP user and programmer. Primary emphasis has been placed on improving the efficiency of layered artificial neural network algorithms. This was done by providing a library of assembly language routines, some of which use node-custom compilation. An object-oriented RAP interface in C++ is provided that allows programmers to incorporate the RAP as a computational server into their own UNIX applications. For those not wishing to program in C++, a command interpreter has been built that provides interactive and shell-script style RAP manipulation. Target text information: Learning topology-preserving maps using self-supervised backpropagation. : Self-supervised backpropagation is an unsupervised learning procedure for feedforward networks, where the desired output vector is identical to the input vector. For backpropagation, we are able to use powerful simulators running on parallel machines. Topology-preserving maps, on the other hand, can be developed by a variant of the competitive learning procedure. However, in a degenerate case, self-supervised backpropagation is a version of competitive learning. A simple extension of the cost function of backpropagation leads to a competitive version of self-supervised backpropagation, which can be used to produce topographic maps. We demonstrate the approach applied to the Traveling Salesman Problem (TSP). The algorithm was implemented using the backpropagation simulator (CLONES) on a parallel machine (RAP). I provide the content of the target node and its neighbors' information. The relation between the target node and its 1-hop neighbors is 'citation'. The 7 categories are: 0: Rule_Learning 1: Neural_Networks 2: Case_Based 3: Genetic_Algorithms 4: Theory 5: Reinforcement_Learning 6: Probabilistic_Methods Question: Based on the content of the target and neighbors' scientific publications, predict the category ID (0 to 6) for the target node.
1
Neural Networks
cora
2664
val
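Self-supervised backpropagation in its plainest form is an autoencoder whose desired output is its own input; a minimal numpy version follows. The topographic-map variant in the target abstract adds a competitive term to this cost, which the sketch omits, and the layer sizes and data are arbitrary.

import numpy as np

rng = np.random.default_rng(6)
X = rng.standard_normal((500, 8))
W1 = 0.1 * rng.standard_normal((8, 3)); b1 = np.zeros(3)   # encoder
W2 = 0.1 * rng.standard_normal((3, 8)); b2 = np.zeros(8)   # decoder
lr = 0.02
for epoch in range(200):
    H = np.tanh(X @ W1 + b1)          # hidden code
    Y = H @ W2 + b2                   # reconstruction
    E = Y - X                         # error signal: the target IS the input
    dW2 = H.T @ E / len(X); db2 = E.mean(0)
    dH = (E @ W2.T) * (1 - H ** 2)    # backpropagate through tanh
    dW1 = X.T @ dH / len(X); db1 = dH.mean(0)
    W1 -= lr * dW1; b1 -= lr * db1; W2 -= lr * dW2; b2 -= lr * db2
print("reconstruction MSE:", round(float(np.mean(E ** 2)), 4))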
1-hop neighbor's text information: "Case-based Reactive Navigation: A case-based method for on-line selection and adaptation of reactive control parameters in autonomous robotic systems", : This article presents a new line of research investigating on-line learning mechanisms for autonomous intelligent agents. We discuss a case-based method for dynamic selection and modification of behavior assemblages for a navigational system. The case-based reasoning module is designed as an addition to a traditional reactive control system, and provides more flexible performance in novel environments without extensive high-level reasoning that would otherwise slow the system down. The method is implemented in the ACBARR (A Case-BAsed Reactive Robotic) system, and evaluated through empirical simulation of the system on several different environments, including "box canyon" environments known to be problematic for reactive control systems in general. fl Technical Report GIT-CC-92/57, College of Computing, Georgia Institute of Technology, Atlanta, Geor gia, 1992. Target text information: What kind of adaptation do CBR systems need? a review of current practice. : This paper reviews a large number of CBR systems to determine when and what sort of adaptation is currently used. Three taxonomies are proposed: an adaptation-relevant taxonomy of CBR systems, a taxonomy of the tasks performed by CBR systems and a taxonomy of adaptation knowledge. To the extent that the set of existing systems reflects constraints on what is feasible, this review shows interesting dependencies between different system-types, the tasks these systems achieve and the adaptation needed to meet system goals. The CBR system designer may find the partition of CBR systems and the division of adaptation knowledge suggested by this paper useful. Moreover, this paper may help focus the initial stages of systems development by suggesting (on the basis of existing work) what types of adaptation knowledge should be supported by a new system. In addition, the paper provides a framework for the preliminary evaluation and comparison of systems. I provide the content of the target node and its neighbors' information. The relation between the target node and its 1-hop neighbors is 'citation'. The 7 categories are: 0: Rule_Learning 1: Neural_Networks 2: Case_Based 3: Genetic_Algorithms 4: Theory 5: Reinforcement_Learning 6: Probabilistic_Methods Question: Based on the content of the target and neighbors' scientific publications, predict the category ID (0 to 6) for the target node.
2
Case Based
cora
2704
test
1-hop neighbor's text information: "Using genetic algorithms to explore pattern recognition in the immune system," : We describe an immune system model based on a universe of binary strings. The model is directed at understanding the pattern recognition processes and learning that take place at both the individual and species levels in the immune system. The genetic algorithm (GA) is a central component of our model. In the paper we study the behavior of the GA on two pattern recognition problems that are relevant to natural immune systems. Finally, we compare our model with explicit fitness sharing techniques for genetic algorithms, and show that our model implements a form of implicit fitness sharing. 1-hop neighbor's text information: Efficient reinforcement learning through symbiotic evolution. : This article presents a new reinforcement learning method called SANE (Symbiotic, Adaptive Neuro-Evolution), which evolves a population of neurons through genetic algorithms to form a neural network capable of performing a task. Symbiotic evolution promotes both cooperation and specialization, which results in a fast, efficient genetic search and discourages convergence to suboptimal solutions. In the inverted pendulum problem, SANE formed effective networks 9 to 16 times faster than the Adaptive Heuristic Critic and 2 times faster than Q-learning and the GENITOR neuro-evolution approach without loss of generalization. Such efficient learning, combined with few domain assumptions, make SANE a promising approach to a broad range of reinforcement learning problems, including many real-world applications. 1-hop neighbor's text information: A cooperative coevolutionary approach to function optimization. : A general model for the coevolution of cooperating species is presented. This model is instantiated and tested in the domain of function optimization, and compared with a traditional GA-based function optimizer. The results are encouraging in two respects. They suggest ways in which the performance of GA and other EA-based optimizers can be improved, and they suggest a new approach to evolving complex structures such as neural networks and rule sets. Target text information: "A Coevolutionary Approach to Learning Sequential Decision Rules", : We present a coevolutionary approach to learning sequential decision rules which appears to have a number of advantages over non-coevolutionary approaches. The coevolutionary approach encourages the formation of stable niches representing simpler sub-behaviors. The evolutionary direction of each subbehavior can be controlled independently, providing an alternative to evolving complex behavior using intermediate training steps. Results are presented showing a significant learning rate speedup over a non-coevolutionary approach in a simulated robot domain. In addition, the results suggest the coevolutionary approach may lead to emer gent problem decompositions. I provide the content of the target node and its neighbors' information. The relation between the target node and its 1-hop neighbors is 'citation'. The 7 categories are: 0: Rule_Learning 1: Neural_Networks 2: Case_Based 3: Genetic_Algorithms 4: Theory 5: Reinforcement_Learning 6: Probabilistic_Methods Question: Based on the content of the target and neighbors' scientific publications, predict the category ID (0 to 6) for the target node.
3
Genetic Algorithms
cora
2614
val
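A Python skeleton of cooperative coevolution in the style of the function-optimization neighbour above: two subpopulations each evolve one coordinate of a solution, and an individual is scored jointly with the best current representative of the other subpopulation. The objective, population sizes, and mutation scale are arbitrary stand-ins.

import random

random.seed(7)
def joint_fitness(a, b):
    # Invented 2-D objective to maximise; the interaction term makes
    # cooperation between the two subpopulations matter.
    return -((a - 1.5) ** 2 + (b + 0.5) ** 2 + 0.5 * a * b)

pops = [[random.uniform(-4, 4) for _ in range(20)] for _ in range(2)]
reps = [pops[0][0], pops[1][0]]                 # current representatives
for gen in range(100):
    for i in (0, 1):
        other = reps[1 - i]
        scored = sorted(pops[i],
                        key=lambda v: joint_fitness(*((v, other) if i == 0
                                                      else (other, v))),
                        reverse=True)
        parents = scored[:10]                   # truncation selection
        pops[i] = [p + random.gauss(0, 0.2) for p in parents for _ in (0, 1)]
        reps[i] = scored[0]                     # best individual represents its species
print([round(r, 2) for r in reps], round(joint_fitness(*reps), 3))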
1-hop neighbor's text information: Some studies in machine learning using the game of Checkers. : 1-hop neighbor's text information: Learning control knowledge in KADS knowledge-based systems: Machine learning meets knowledge engineering. : Machine learning and knowledge engineering have always been strongly related, but the introduction of new representations in knowledge engineering has created a gap between them. This paper describes research aimed at applying machine learning techniques to the current knowledge engineering representations. We propose a system that redesigns a part of a knowledge-based system, the so-called control knowledge. We claim a strong similarity between redesign of knowledge-based systems and incremental machine learning. Finally we will relate this work to existing research. 1-hop neighbor's text information: Introspective Reasoning using Meta-Explanations for Multistrategy Learning. : In order to learn effectively, a reasoner must not only possess knowledge about the world and be able to improve that knowledge, but it also must introspectively reason about how it performs a given task and what particular pieces of knowledge it needs to improve its performance at the current task. Introspection requires declarative representations of meta-knowledge of the reasoning performed by the system during the performance task, of the system's knowledge, and of the organization of this knowledge. This chapter presents a taxonomy of possible reasoning failures that can occur during a performance task, declarative representations of these failures, and associations between failures and particular learning strategies. The theory is based on Meta-XPs, which are explanation structures that help the system identify failure types, formulate learning goals, and choose appropriate learning strategies in order to avoid similar mistakes in the future. The theory is implemented in a computer model of an introspective reasoner that performs multistrategy learning during a story understanding task. Target text information: Learning problem-solving concepts by reflecting on problem solving. : Learning and problem solving are intimately related: problem solving determines the knowledge requirements of the reasoner which learning must fulfill, and learning enables improved problem-solving performance. Different models of problem solving, however, recognize different knowledge needs, and, as a result, set up different learning tasks. Some recent models analyze problem solving in terms of generic tasks, methods, and subtasks. These models require the learning of problem-solving concepts such as new tasks and new task decompositions. We view reflection as a core process for learning these problem-solving concepts. In this paper, we identify the learning issues raised by the task-structure framework of problem solving. We view the problem solver as an abstract device, and represent how it works in terms of a structure-behavior-function model which specifies how the knowledge and reasoning of the problem solver results in the accomplishment of its tasks. We describe how this model enables reflection, and how model-based reflection enables the reasoner to adapt its task structure to produce solutions of better quality. The Autognostic system illustrates this reflection process. I provide the content of the target node and its neighbors' information. The relation between the target node and its 1-hop neighbors is 'citation'.
The 7 categories are: 0: Rule_Learning 1: Neural_Networks 2: Case_Based 3: Genetic_Algorithms 4: Theory 5: Reinforcement_Learning 6: Probabilistic_Methods Question: Based on the content of the target and neighbors' scientific publications, predict the category ID (0 to 6) for the target node.
2
Case Based
cora
1159
val
1-hop neighbor's text information: On the sample complexity of learning Bayesian networks. : In recent years there has been an increasing interest in learning Bayesian networks from data. One of the most effective methods for learning such networks is based on the minimum description length (MDL) principle. Previous work has shown that this learning procedure is asymptotically successful: with probability one, it will converge to the target distribution, given a sufficient number of samples. However, the rate of this convergence has been hitherto unknown. In this work we examine the sample complexity of MDL based learning procedures for Bayesian networks. We show that the number of samples needed to learn an ε-close approximation (in terms of entropy distance) with confidence δ is O((1/ε)^(4/3) log(1/ε) log(1/δ) log log(1/δ)). This means that the sample complexity is a low-order polynomial in the error threshold and sub-linear in the confidence bound. We also discuss how the constants in this term depend on the complexity of the target distribution. Finally, we address questions of asymptotic minimality and propose a method for using the sample complexity results to speed up the learning process. 1-hop neighbor's text information: Context-specific independence in Bayesian networks. : Bayesian networks provide a language for qualitatively representing the conditional independence properties of a distribution. This allows a natural and compact representation of the distribution, eases knowledge acquisition, and supports effective inference algorithms. It is well-known, however, that there are certain independencies that we cannot capture qualitatively within the Bayesian network structure: independencies that hold only in certain contexts, i.e., given a specific assignment of values to certain variables. In this paper, we propose a formal notion of context-specific independence (CSI), based on regularities in the conditional probability tables (CPTs) at a node. We present a technique, analogous to (and based on) d-separation, for determining when such independence holds in a given network. We then focus on a particular qualitative representation scheme, tree-structured CPTs, for capturing CSI. We suggest ways in which this representation can be used to support effective inference algorithms. In particular, we present a structural decomposition of the resulting network which can improve the performance of clustering algorithms, and an alternative algorithm based on cutset conditioning. 1-hop neighbor's text information: Sequential update of Bayesian network structure. : There is an obvious need for improving the performance and accuracy of a Bayesian network as new data is observed. Because of errors in model construction and changes in the dynamics of the domains, we cannot afford to ignore the information in new data. While sequential update of parameters for a fixed structure can be accomplished using standard techniques, sequential update of network structure is still an open problem. In this paper, we investigate sequential update of Bayesian networks where both parameters and structure are expected to change. We introduce a new approach that allows for the flexible manipulation of the tradeoff between the quality of the learned networks and the amount of information that is maintained about past observations.
We formally describe our approach, including the necessary modifications to the scoring functions for learning Bayesian networks, evaluate its effectiveness through an empirical study, and extend it to the case of missing data. Target text information: A Bayesian approach to learning Bayesian networks with local structure. : In this paper we examine a novel addition to the known methods for learning Bayesian networks from data that improves the quality of the learned networks. Our approach explicitly represents and learns the local structure in the conditional probability tables (CPTs) that quantify these networks. This increases the space of possible models, enabling the representation of CPTs with a variable number of parameters that depends on the learned local structures. The resulting learning procedure is capable of inducing models that better emulate the real complexity of the interactions present in the data. We describe the theoretical foundations and practical aspects of learning local structures, as well as an empirical evaluation of the proposed method. This evaluation indicates that learning curves characterizing the procedure that exploits the local structure converge faster than those of the standard procedure. Our results also show that networks learned with local structure tend to be more complex (in terms of arcs), yet require fewer parameters. I provide the content of the target node and its neighbors' information. The relation between the target node and its 1-hop neighbors is 'citation'. The 7 categories are: 0: Rule_Learning 1: Neural_Networks 2: Case_Based 3: Genetic_Algorithms 4: Theory 5: Reinforcement_Learning 6: Probabilistic_Methods Question: Based on the content of the target and neighbors' scientific publications, predict the category ID (0 to 6) for the target node.
6
Probabilistic Methods
cora
2463
val
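The scoring machinery shared by these Bayesian-network records can be made concrete with an MDL-style family score in Python: the log-likelihood of a node under a candidate parent set, minus a penalty proportional to the number of CPT parameters. Local-structure methods win precisely by shrinking that parameter count; the sketch below scores only full CPTs, on invented binary data.

import numpy as np
from collections import Counter

rng = np.random.default_rng(8)
n = 1000
A = rng.integers(0, 2, n)
B = rng.integers(0, 2, n)
C = ((A & B) ^ (rng.random(n) < 0.1)).astype(int)   # C really depends on A and B

def family_mdl(child, parents, n_vals=2):
    # Maximum-likelihood log-probability of child given its parent configuration.
    counts = Counter(zip(*parents, child))
    parent_counts = Counter(zip(*parents)) if parents else {(): len(child)}
    ll = sum(c * np.log(c / parent_counts[key[:-1]])
             for key, c in counts.items())
    n_params = (n_vals - 1) * n_vals ** len(parents)    # full-CPT size
    return ll - 0.5 * np.log(len(child)) * n_params     # MDL penalty

for ps in ([], [A], [A, B]):
    print(len(ps), "parents:", round(family_mdl(C, ps), 1))

Run on this data, the two-parent family should score best: the likelihood gain from conditioning on both A and B outweighs the larger CPT penalty, which is exactly the trade-off the MDL score is built to arbitrate.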
1-hop neighbor's text information: Learning to predict by the methods of temporal differences. : This article introduces a class of incremental learning procedures specialized for prediction, that is, for using past experience with an incompletely known system to predict its future behavior. Whereas conventional prediction-learning methods assign credit by means of the difference between predicted and actual outcomes, the new methods assign credit by means of the difference between temporally successive predictions. Although such temporal-difference methods have been used in Samuel's checker player, Holland's bucket brigade, and the author's Adaptive Heuristic Critic, they have remained poorly understood. Here we prove their convergence and optimality for special cases and relate them to supervised-learning methods. For most real-world prediction problems, temporal-difference methods require less memory and less peak computation than conventional methods; and they produce more accurate predictions. We argue that most problems to which supervised learning is currently applied are really prediction problems of the sort to which temporal-difference methods can be applied to advantage. 1-hop neighbor's text information: Spurious Solutions to the Bellman Equation: Reinforcement learning algorithms often work by finding functions that satisfy the Bellman equation. This yields an optimal solution for prediction with Markov chains and for controlling a Markov decision process (MDP) with a finite number of states and actions. This approach is also frequently applied to Markov chains and MDPs with infinite states. We show that, in this case, the Bellman equation may have multiple solutions, many of which lead to erroneous predictions and policies (Baird, 1996). Algorithms and conditions are presented that guarantee a single, optimal solution to the Bellman equation. 1-hop neighbor's text information: Generalization in reinforcement learning: Safely approximating the value function. : To appear in: G. Tesauro, D. S. Touretzky and T. K. Leen, eds., Advances in Neural Information Processing Systems 7, MIT Press, Cambridge MA, 1995. A straightforward approach to the curse of dimensionality in reinforcement learning and dynamic programming is to replace the lookup table with a generalizing function approximator such as a neural net. Although this has been successful in the domain of backgammon, there is no guarantee of convergence. In this paper, we show that the combination of dynamic programming and function approximation is not robust, and in even very benign cases, may produce an entirely wrong policy. We then introduce Grow-Support, a new algorithm which is safe from divergence yet can still reap the benefits of successful generalization. Target text information: Baird (1995). Residual algorithms: Reinforcement learning with function approximation. : A number of reinforcement learning algorithms have been developed that are guaranteed to converge to the optimal solution when used with lookup tables. It is shown, however, that these algorithms can easily become unstable when implemented directly with a general function-approximation system, such as a sigmoidal multilayer perceptron, a radial-basis-function system, a memory-based learning system, or even a linear function-approximation system. A new class of algorithms, residual gradient algorithms, is proposed, which perform gradient descent on the mean squared Bellman residual, guaranteeing convergence.
It is shown, however, that they may learn very slowly in some cases. A larger class of algorithms, residual algorithms, is proposed that has the guaranteed convergence of the residual gradient algorithms, yet can retain the fast learning speed of direct algorithms. In fact, both direct and residual gradient algorithms are shown to be special cases of residual algorithms, and it is shown that residual algorithms can combine the advantages of each approach. The direct, residual gradient, and residual forms of value iteration, Q-learning, and advantage learning are all presented. Theoretical analysis is given explaining the properties these algorithms have, and simulation results are given that demonstrate these properties. I provide the content of the target node and its neighbors' information. The relation between the target node and its 1-hop neighbors is 'citation'. The 7 categories are: 0: Rule_Learning 1: Neural_Networks 2: Case_Based 3: Genetic_Algorithms 4: Theory 5: Reinforcement_Learning 6: Probabilistic_Methods Question: Based on the content of the target and neighbors' scientific publications, predict the category ID (0 to 6) for the target node.
5
Reinforcement Learning
cora
1507
test
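The direct-versus-residual-gradient distinction in the target abstract above fits in a few lines of Python for TD(0) with linear features. The residual-gradient rule descends the mean squared Bellman residual and so also differentiates through the next-state value. One simplification to flag: a single next-state sample is reused for both terms, whereas Baird notes that stochastic MDPs strictly require two independent next-state samples for an unbiased residual gradient. The chain and rewards are invented.

import numpy as np

rng = np.random.default_rng(9)
n_states, gamma, alpha = 5, 0.9, 0.05
phi = np.eye(n_states)                 # one feature per state (tabular special case)
R = np.array([0., 0., 0., 0., 1.])     # reward on entering the last state

def step(s):                           # simple random-walk chain
    return min(s + 1, n_states - 1) if rng.random() < 0.7 else max(s - 1, 0)

w_dir, w_res = np.zeros(n_states), np.zeros(n_states)
s = 0
for t in range(20000):
    s2 = step(s)
    done = (s2 == n_states - 1)
    for w, residual in ((w_dir, False), (w_res, True)):
        v_next = 0.0 if done else w @ phi[s2]
        delta = R[s2] + gamma * v_next - w @ phi[s]
        # Direct: follow phi(s) only. Residual gradient: also differentiate
        # through the next-state value, phi(s) - gamma*phi(s').
        grad = (phi[s] - (0 if done else gamma * phi[s2])) if residual else phi[s]
        w += alpha * delta * grad      # in-place update aliases w_dir / w_res
    s = 0 if done else s2

print(np.round(w_dir, 2), np.round(w_res, 2))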
1-hop neighbor's text information: Weakly Learning DNF and Characterizing Statistical Query Learning Using Fourier Analysis, : We present new results, both positive and negative, on the well-studied problem of learning disjunctive normal form (DNF) expressions. We first prove that an algorithm due to Kushilevitz and Mansour [16] can be used to weakly learn DNF using membership queries in polynomial time, with respect to the uniform distribution on the inputs. This is the first positive result for learning unrestricted DNF expressions in polynomial time in any nontrivial formal model of learning. It provides a sharp contrast with the results of Kharitonov [15], who proved that AC^0 is not efficiently learnable in the same model (given certain plausible cryptographic assumptions). We also present efficient learning algorithms in various models for the read-k and SAT-k subclasses of DNF. For our negative results, we turn our attention to the recently introduced statistical query model of learning [11]. This model is a restricted version of the popular Probably Approximately Correct (PAC) model [23], and practically every class known to be efficiently learnable in the PAC model is in fact learnable in the statistical query model [11]. Here we give a general characterization of the complexity of statistical query learning in terms of the number of uncorrelated functions in the concept class. This is a distribution-dependent quantity yielding upper and lower bounds on the number of statistical queries required for learning on any input distribution. As a corollary, we obtain that DNF expressions and decision trees are not even weakly learnable with respect to the uniform input distribution in polynomial time in the statistical query model. This result is information-theoretic and therefore does not rely on any unproven assumptions. It demonstrates that no simple modification of the existing algorithms in the computational learning theory literature for learning various restricted forms of DNF and decision trees from passive random examples (and also several algorithms proposed in the experimental machine learning communities, such as the ID3 algorithm for decision trees [22] and its variants) will solve the general problem. The unifying tool for all of our results is the Fourier analysis of a finite class of boolean functions on the hypercube. (* This research is sponsored in part by the Wright Laboratory, Aeronautical Systems Center, Air Force Materiel Command, USAF, and the Advanced Research Projects Agency (ARPA) under grant number F33615-93-1-1330. Support also is sponsored by the National Science Foundation under Grant No. CC-9119319. Blum also supported in part by NSF National Young Investigator grant CCR-9357793. Views and conclusions contained in this document are those of the authors and should not be interpreted as necessarily representing official policies or endorsements, either expressed or implied, of Wright Laboratory or the United States Government, or NSF.) 1-hop neighbor's text information: Learning Boolean read-once formulas with arbitrary symmetric and constant fan-in gates. : A read-once formula is a boolean formula in which each variable occurs at most once. Such formulas are also called μ-formulas or boolean trees. This paper treats the problem of exactly identifying an unknown read-once formula using specific kinds of queries.
The main results are a polynomial time algorithm for exact identification of monotone read-once formulas using only membership queries, and a polynomial time algorithm for exact identification of general read-once formulas using equivalence and membership queries (a protocol based on the notion of a minimally adequate teacher [1]). Our results improve on Valiant's previous results for read-once formulas [26]. We also show that no polynomial time algorithm using only membership queries or only equivalence queries can exactly identify all read-once formulas. 1-hop neighbor's text information: Learning with queries but incomplete information. : We investigate learning with membership and equivalence queries assuming that the information provided to the learner is incomplete. By incomplete we mean that some of the membership queries may be answered by I don't know. This model is a worst-case version of the incomplete membership query model of Angluin and Slonim. It attempts to model practical learning situations, including an experiment of Lang and Baum that we describe, where the teacher may be unable to answer reliably some queries that are critical for the learning algorithm. We present algorithms to learn monotone k-term DNF with membership queries only, and to learn monotone DNF with membership and equivalence queries. Compared to the complete information case, the query complexity increases by an additive term linear in the number of I don't know answers received. We also observe that the blowup in the number of queries can in general be exponential for both our new model and the incomplete membership model. Target text information: Learning conjunctions of Horn clauses. : I provide the content of the target node and its neighbors' information. The relation between the target node and its 1-hop neighbors is 'citation'. The 7 categories are: 0: Rule_Learning 1: Neural_Networks 2: Case_Based 3: Genetic_Algorithms 4: Theory 5: Reinforcement_Learning 6: Probabilistic_Methods Question: Based on the content of the target and neighbors' scientific publications, predict the category ID (0 to 6) for the target node.
4
Theory
cora
1,299
test
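The record above revolves around query protocols (membership and equivalence queries) for learning boolean formulae. As a toy illustration of the membership-query protocol only, and not of the Kushilevitz-Mansour or read-once algorithms themselves, here is a minimal sketch that exactly identifies a monotone conjunction (the simplest Horn-style concept) with n membership queries; `membership_query` is a hypothetical oracle callable assumed for this sketch.

```python
def learn_monotone_conjunction(n, membership_query):
    """Exactly identify a monotone conjunction over n boolean variables.

    membership_query(x) returns the hidden conjunction's value on the
    0/1 vector x. Flipping one bit of the all-ones input to 0 changes
    the output iff that variable appears in the conjunction.
    """
    assert membership_query([1] * n), "all-ones must satisfy a monotone conjunction"
    relevant = []
    for i in range(n):
        x = [1] * n
        x[i] = 0
        if not membership_query(x):
            relevant.append(i)
    return relevant  # indices of the conjoined variables

# usage: hidden concept x0 AND x3 over 5 variables
hidden = lambda x: x[0] == 1 and x[3] == 1
print(learn_monotone_conjunction(5, hidden))  # -> [0, 3]
```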
1-hop neighbor's text information: Tracking drifting concepts by minimizing disagreements. : In this paper we consider the problem of tracking a subset of a domain (called the target) which changes gradually over time. A single (unknown) probability distribution over the domain is used to generate random examples for the learning algorithm and measure the speed at which the target changes. Clearly, the more rapidly the target moves, the harder it is for the algorithm to maintain a good approximation of the target. Therefore we evaluate algorithms based on how much movement of the target can be tolerated between examples while predicting with accuracy ε. Furthermore, the complexity of the class H of possible targets, as measured by d, its VC-dimension, also affects the difficulty of tracking the target concept. We show that if the problem of minimizing the number of disagreements with a sample from among concepts in a class H can be approximated to within a factor k, then there is a simple tracking algorithm for H which can achieve a probability ε of making a mistake if the target movement rate is at most a constant times ε²/(k(d + k) ln(1/ε)), where d is the Vapnik-Chervonenkis dimension of H. Also, we show that if H is properly PAC-learnable, then there is an efficient (randomized) algorithm that with high probability approximately minimizes disagreements to within a factor of 7d + 1, yielding an efficient tracking algorithm for H which tolerates drift rates up to a constant times ε²/(d² ln(1/ε)). In addition, we prove complementary results for the classes of halfspaces and axis-aligned hyperrectangles showing that the maximum rate of drift that any algorithm (even with unlimited 1-hop neighbor's text information: Cognitive Computation (Extended Abstract): Cognitive computation is discussed as a discipline that links together neurobiology, cognitive psychology and artificial intelligence. 1-hop neighbor's text information: Cryptographic limitations on learning boolean formulae and finite automata. : In this paper we prove the intractability of learning several classes of Boolean functions in the distribution-free model (also called the Probably Approximately Correct or PAC model) of learning from examples. These results are representation independent, in that they hold regardless of the syntactic form in which the learner chooses to represent its hypotheses. Our methods reduce the problems of cracking a number of well-known public-key cryptosystems to the learning problems. We prove that a polynomial-time learning algorithm for Boolean formulae, deterministic finite automata or constant-depth threshold circuits would have dramatic consequences for cryptography and number theory: in particular, such an algorithm could be used to break the RSA cryptosystem, factor Blum integers (composite numbers equivalent to 3 modulo 4), and detect quadratic residues. The results hold even if the learning algorithm is only required to obtain a slight advantage in prediction over random guessing. The techniques used demonstrate an interesting duality between learning and cryptography. We also apply our results to obtain strong intractability results for approximating a generalization of graph coloring. (* This research was conducted while the author was at Harvard University and supported by an A.T.& T. Bell Laboratories scholarship. † Supported by grants ONR-N00014-85-K-0445, NSF-DCR-8606366 and NSF-CCR-89-02500, DAAL03-86-K-0171, DARPA AFOSR 89-0506, and by SERC.)
Target text information: Toward efficient agnostic learning. : In this paper we initiate an investigation of generalizations of the Probably Approximately Correct (PAC) learning model that attempt to significantly weaken the target function assumptions. The ultimate goal in this direction is informally termed agnostic learning, in which we make virtually no assumptions on the target function. The name derives from the fact that as designers of learning algorithms, we give up the belief that Nature (as represented by the target function) has a simple or succinct explanation. We give a number of positive and negative results that provide an initial outline of the possibilities for agnostic learning. Our results include hardness results for the most obvious generalization of the PAC model to an agnostic setting, an efficient and general agnostic learning method based on dynamic programming, relationships between loss functions for agnostic learning, and an algorithm for a learning problem that involves hidden variables. I provide the content of the target node and its neighbors' information. The relation between the target node and its 1-hop neighbors is 'citation'. The 7 categories are: 0: Rule_Learning 1: Neural_Networks 2: Case_Based 3: Genetic_Algorithms 4: Theory 5: Reinforcement_Learning 6: Probabilistic_Methods Question: Based on the content of the target and neighbors' scientific publications, predict the category ID (0 to 6) for the target node.
4
Theory
cora
1,896
val
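The agnostic-learning record above makes no assumption that the labels come from the hypothesis class. A minimal sketch of that setting, assuming 1-D inputs and the class of threshold functions h_t(x) = 1[x >= t]: empirical-risk minimization over all candidate thresholds, with arbitrary (possibly inconsistent) labels. This illustrates the problem setup only, not the paper's dynamic-programming method.

```python
def best_threshold(points):
    """Agnostic ERM over threshold classifiers h_t(x) = 1[x >= t].

    points: list of (x, y) with y in {0, 1}; labels may be arbitrary
    (no target-function assumption). Returns (t, empirical_errors).
    """
    pts = sorted(points)
    xs = [x for x, _ in pts]
    n = len(pts)
    # candidate thresholds: below all points, between points, above all
    candidates = ([xs[0] - 1.0]
                  + [(xs[i] + xs[i + 1]) / 2 for i in range(n - 1)]
                  + [xs[-1] + 1.0])
    best_t, best_err = None, n + 1
    for t in candidates:
        err = sum((x >= t) != (y == 1) for x, y in pts)  # 0-1 loss
        if err < best_err:
            best_t, best_err = t, err
    return best_t, best_err

print(best_threshold([(0.1, 0), (0.4, 1), (0.5, 0), (0.9, 1)]))
```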
1-hop neighbor's text information: Active learning with committees for text categorization. : In many real-world domains, supervised learning requires a large number of training examples. In this paper, we describe an active learning method that uses a committee of learners to reduce the number of training examples required for learning. Our approach is similar to the Query by Committee framework, where disagreement among the committee members on the predicted label for the input part of the example is used to signal the need for knowing the actual value of the label. Our experiments are conducted in the text categorization domain, which is characterized by a large number of features, many of which are irrelevant. We report here on experiments using a committee of Winnow-based learners and demonstrate that this approach can reduce the number of labeled training examples required over that used by a single Winnow learner by 1-2 orders of magnitude. 1-hop neighbor's text information: Query by Committee, : We propose an algorithm called query by committee, in which a committee of students is trained on the same data set. The next query is chosen according to the principle of maximal disagreement. The algorithm is studied for two toy models: the high-low game and perceptron learning of another perceptron. As the number of queries goes to infinity, the committee algorithm yields asymptotically finite information gain. This leads to generalization error that decreases exponentially with the number of examples. This is in marked contrast to learning from randomly chosen inputs, for which the information gain approaches zero and the generalization error decreases with a relatively slow inverse power law. We suggest that asymptotically finite information gain may be an important characteristic of good query algorithms. Target text information: Selective Sampling Using the Query by Committee Algorithm, : We analyze the "query by committee" algorithm, a method for filtering informative queries from a random stream of inputs. We show that if the two-member committee algorithm achieves information gain with positive lower bound, then the prediction error decreases exponentially with the number of queries. We show that, in particular, this exponential decrease holds for query learning of perceptrons. Keywords: selective sampling, query learning, Bayesian Learning, experimental design I provide the content of the target node and its neighbors' information. The relation between the target node and its 1-hop neighbors is 'citation'. The 7 categories are: 0: Rule_Learning 1: Neural_Networks 2: Case_Based 3: Genetic_Algorithms 4: Theory 5: Reinforcement_Learning 6: Probabilistic_Methods Question: Based on the content of the target and neighbors' scientific publications, predict the category ID (0 to 6) for the target node.
4
Theory
cora
1,590
val
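The disagreement-filtering idea in the record above is simple enough to sketch. A minimal two-member committee filter: labels are requested only when the two hypotheses disagree. Plain perceptron updates stand in for the analysed algorithm, which instead samples its committee from the version space (Gibbs prediction), so this is an assumption-laden illustration, not that algorithm.

```python
import random

def qbc_two_member(stream, n):
    """Two-member query-by-committee filter (disagreement sketch).

    stream yields (x, y) pairs with x a length-n float vector; y is
    consulted -- i.e., a label query is made -- only when the two
    committee members disagree on x, so most examples are filtered
    out unlabelled.
    """
    w = [[random.gauss(0, 1) for _ in range(n)] for _ in range(2)]
    queries = 0
    for x, y in stream:                      # y in {0, 1}
        pred = [sum(a * b for a, b in zip(wk, x)) >= 0 for wk in w]
        if pred[0] == pred[1]:
            continue                         # no disagreement: discard unlabelled
        queries += 1                         # informative example: ask for y
        s = 1.0 if y else -1.0
        for k in range(2):
            if pred[k] != bool(y):           # perceptron update for the wrong member
                w[k] = [wi + s * xi for wi, xi in zip(w[k], x)]
    return w, queries
```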
1-hop neighbor's text information: Structured Reachability Analysis for Markov Decision Processes: Recent research in decision theoretic planning has focussed on making the solution of Markov decision processes (MDPs) more feasible. We develop a family of algorithms for structured reachability analysis of MDPs that are suitable when an initial state (or set of states) is known. Using compact, structured representations of MDPs (e.g., Bayesian networks), our methods, which vary in the tradeoff between complexity and accuracy, produce structured descriptions of (estimated) reachable states that can be used to eliminate variables or variable values from the problem description, reducing the size of the MDP and making it easier to solve. One contribution of our work is the extension of ideas from GRAPHPLAN to deal with the distributed nature of action representations typically embodied within Bayes nets and the problem of correlated action effects. We also demonstrate that our algorithm can be made more complete by using k-ary constraints instead of binary constraints. Another contribution is the illustration of how the compact representation of reachability constraints can be exploited by several existing (exact and approximate) abstraction algorithms for MDPs. Target text information: Approximating value trees in structured dynamic programming. : We propose and examine a method of approximate dynamic programming for Markov decision processes based on structured problem representations. We assume an MDP is represented using a dynamic Bayesian network, and construct value functions using decision trees as our function representation. The size of the representation is kept within acceptable limits by pruning these value trees so that leaves represent possible ranges of values, thus approximating the value functions produced during optimization. We propose a method for detecting convergence, prove error bounds on the resulting approximately optimal value functions and policies, and describe some preliminary experimental results. I provide the content of the target node and its neighbors' information. The relation between the target node and its 1-hop neighbors is 'citation'. The 7 categories are: 0: Rule_Learning 1: Neural_Networks 2: Case_Based 3: Genetic_Algorithms 4: Theory 5: Reinforcement_Learning 6: Probabilistic_Methods Question: Based on the content of the target and neighbors' scientific publications, predict the category ID (0 to 6) for the target node.
5
Reinforcement Learning
cora
1,411
test
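The record above approximates value functions produced by dynamic programming. For reference, a plain tabular value-iteration sketch with a Bellman-residual stopping test; the tree-structured value representation and pruning described in the paper are deliberately omitted, and the P/R encoding below is an assumption of this sketch.

```python
def value_iteration(P, R, gamma=0.9, eps=1e-6):
    """Plain tabular value iteration with a Bellman-residual stopping test.

    P[a][s][t] is the probability of moving from state s to state t
    under action a; R[s] is the state reward. The structured algorithm
    in the paper replaces this table of values with a pruned decision
    tree, but the fixed point being approximated is the same.
    """
    n = len(R)
    V = [0.0] * n
    while True:
        Vnew = [R[s] + gamma * max(sum(P[a][s][t] * V[t] for t in range(n))
                                   for a in range(len(P)))
                for s in range(n)]
        # residual small enough to guarantee an eps-accurate value function
        if max(abs(a - b) for a, b in zip(Vnew, V)) < eps * (1 - gamma) / (2 * gamma):
            return Vnew
        V = Vnew
```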
1-hop neighbor's text information: A Knowledge-Based Framework for Belief Change, Part II: Revision and Update. : The study of belief change has been an active area in philosophy and AI. In recent years two special cases of belief change, belief revision and belief update, have been studied in detail. In a companion paper [FH94b] we introduced a new framework to model belief change. This framework combines temporal and epistemic modalities with a notion of plausibility, allowing us to examine the changes of beliefs over time. In this paper we show how belief revision and belief update can be captured in our framework. This allows us to compare the assumptions made by each method and to better understand the principles underlying them. In particular, it allows us to understand the source of Gardenfors' triviality result for belief revision [Gar86] and suggests a way of mitigating the problem. It also shows that Katsuno and Mendelzon's notion of belief update [KM91a] depends on several strong assumptions that may limit its applicability in AI. 1-hop neighbor's text information: Plausibility Measures: A User's Guide: We examine a new approach to modeling uncertainty based on plausibility measures, where a plausibility measure just associates with an event its plausibility, an element in some partially ordered set. This approach is easily seen to generalize other approaches to modeling uncertainty, such as probability measures, belief functions, and possibility measures. The lack of structure in a plausibility measure makes it easy for us to add structure on an as-needed basis, letting us examine what is required to ensure that a plausibility measure has certain properties of interest. This gives us insight into the essential features of the properties in question, while allowing us to prove general results that apply to many approaches to reasoning about uncertainty. Plausibility measures have already proved useful in analyzing default reasoning. In this paper, we examine their algebraic properties, analogues to the use of + and × in probability theory. An understanding of such properties will be essential if plausibility measures are to be used in practice as a representation tool. 1-hop neighbor's text information: On the semantics of belief revision systems. : We consider belief revision operators that satisfy the Alchourron-Gardenfors-Makinson postulates, and present an epistemic logic in which, for any such revision operator, the result of a revision can be described by a sentence in the logic. In our logic, the fact that the agent's set of beliefs is φ is represented by the sentence Oφ, where O is Levesque's `only know' operator. Intuitively, Oφ is read as `φ is all that is believed.' The fact that the agent believes ψ is represented by the sentence Bψ, read in the usual way as `ψ is believed'. The connective ⋄ represents update as defined by Katsuno and Mendelzon. The revised beliefs are represented by the sentence Oφ ⋄ Bψ. We show that for every revision operator that satisfies the AGM postulates, there is a model for our epistemic logic such that the beliefs implied by the sentence Oφ ⋄ Bψ in this model correspond exactly to the sentences implied by the theory that results from revising φ by ψ. This means that reasoning about changes in the agent's beliefs reduces to model checking of certain epistemic sentences. The negative result in the paper is that this type of formal account of revision cannot be extended to the situation where the agent is able to reason about its beliefs.
A fully introspective agent cannot use our construction to reason about the results of its own revisions, on pain of triviality. Target text information: Rank-based systems: A simple approach to belief revision, belief update, and reasoning about evidence and actions. : We describe a ranked-model semantics for if-then rules admitting exceptions, which provides a coherent framework for many facets of evidential and causal reasoning. Rule priorities are automatically extracted from the knowledge base to facilitate the construction and retraction of plausible beliefs. To represent causation, the formalism incorporates the principle of Markov shielding which imposes a stratified set of independence constraints on rankings of interpretations. We show how this formalism resolves some classical problems associated with specificity, prediction and abduction, and how it offers a natural way of unifying belief revision, belief update, and reasoning about actions. I provide the content of the target node and its neighbors' information. The relation between the target node and its 1-hop neighbors is 'citation'. The 7 categories are: 0: Rule_Learning 1: Neural_Networks 2: Case_Based 3: Genetic_Algorithms 4: Theory 5: Reinforcement_Learning 6: Probabilistic_Methods Question: Based on the content of the target and neighbors' scientific publications, predict the category ID (0 to 6) for the target node.
6
Probabilistic Methods
cora
2,319
test
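The ranked-model semantics in the record above builds on ranking (kappa) functions over worlds. A minimal sketch of that core, assuming worlds are hashable objects and propositions are predicates; the shift rule below is Spohn-style conditioning, not the paper's rule-priority extraction or Markov shielding.

```python
def kappa(ranks, prop):
    """Rank of a proposition = min rank of its worlds (kappa function)."""
    vals = [r for w, r in ranks.items() if prop(w)]
    return min(vals) if vals else float("inf")

def revise(ranks, prop, strength=1):
    """Qualitative revision on a ranking function (Spohn-style sketch).

    Worlds satisfying prop are shifted so the most plausible of them
    gets rank 0; the remaining worlds are pushed down by `strength`.
    """
    ka = kappa(ranks, prop)
    knot = kappa(ranks, lambda w: not prop(w))
    out = {}
    for w, r in ranks.items():
        out[w] = r - ka if prop(w) else r - knot + strength
    return out

# worlds are (rain, wet) pairs; learn that it is raining
ranks = {(True, True): 1, (True, False): 3, (False, True): 2, (False, False): 0}
print(revise(ranks, lambda w: w[0]))  # rain-worlds become the most plausible
```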
1-hop neighbor's text information: A heuristic approach to the discovery of macro-operators. : The negative effect is naturally more significant in the more complex domain. The graph for the simple domain crosses the 0 line earlier than the complex domain. That means that learning starts to be useful with weight greater than 0.6 for the simple domain and 0.7 for the complex domain. As we relax the optimality requirement more significantly (with a W = 0.8), macro usage in the more complex domain becomes more advantageous. The purpose of the research described in this paper is to identify the parameters that affect deductive learning and to perform experiments systematically in order to understand the nature of those effects. The goal of this paper is to demonstrate the methodology of performing a parametric experimental study of deductive learning. The examples here include the study of two parameters: the point on the satisficing-optimizing scale that is used during the search carried out during problem-solving time and during learning time. We showed that A*, which looks for optimal solutions, cannot benefit from macro learning, but as the strategy comes closer to best-first (satisficing search), the utility of macros increases. We also demonstrated that deductive learners that learn offline by solving training problems are sensitive to the type of search used during the learning. We showed that in general optimizing search is best for learning. It generates macros that increase the quality of solutions regardless of the search method used during problem solving. It also improves the efficiency for problem solvers that require a high level of optimality. The only drawback in using optimizing search is the increase in learning resources spent. We are aware of the fact that the results described here are not very surprising. The goal of the parametric study is not necessarily to find exciting results, but to obtain results, sometimes even previously known, in a controlled experimental environment. The work described here is only part of our research plan. We are currently in the process of extensive experimentation with all the parameters described here and also with others. We also intend to test the validity of the conclusions reached during the study by repeating some of the tests in several of the commonly known search problems. We hope that such systematic experimentation will help the research community to better understand the process of deductive learning and will serve as a demonstration of the experimental methodology that should be used in machine learning research. 1-hop neighbor's text information: Learning semantic grammars with constructive inductive logic programming. : Automating the construction of semantic grammars is a difficult and interesting problem for machine learning. This paper shows how the semantic-grammar acquisition problem can be viewed as the learning of search-control heuristics in a logic program. Appropriate control rules are learned using a new first-order induction algorithm that automatically invents useful syntactic and semantic categories. Empirical results show that the learned parsers generalize well to novel sentences and outperform previous approaches based on connectionist techniques. 1-hop neighbor's text information: "An analysis of bayesian classifiers," : In this paper we present an average-case analysis of the Bayesian classifier, a simple probabilistic induction algorithm that fares remarkably well on many learning tasks.
Our analysis assumes a monotone conjunctive target concept, Boolean attributes that are independent of each other and that follow a single distribution, and the absence of attribute noise. We first calculate the probability that the algorithm will induce an arbitrary pair of concept descriptions; we then use this expression to compute the probability of correct classification over the space of instances. The analysis takes into account the number of training instances, the number of relevant and irrelevant attributes, the distribution of these attributes, and the level of class noise. In addition, we explore the behavioral implications of the analysis by presenting predicted learning curves for a number of artificial domains. We also give experimental results on these domains as a check on our reasoning. Finally, we discuss some unresolved questions about the behavior of Bayesian classifiers and outline directions for future research. Note: Without acknowledgements and references, this paper fits into 12 pages with dimensions 5.5 inches × 7.5 inches using 12 point LaTeX type. However, we find the current format more desirable. We have not submitted the paper to any other conference or journal. Target text information: Computational Learning in Humans and Machines: In this paper we review research on machine learning and its relation to computational models of human learning. We focus initially on concept induction, examining five main approaches to this problem, then consider the more complex issue of learning sequential behaviors. After this, we compare the rhetoric that sometimes appears in the machine learning and psychological literature with the growing evidence that different theoretical paradigms typically produce similar results. In response, we suggest that concrete computational models, which currently dominate the field, may be less useful than simulations that operate at a more abstract level. We illustrate this point with an abstract simulation that explains a challenging phenomenon in the area of category learning, and we conclude with some general observations about such abstract models. I provide the content of the target node and its neighbors' information. The relation between the target node and its 1-hop neighbors is 'citation'. The 7 categories are: 0: Rule_Learning 1: Neural_Networks 2: Case_Based 3: Genetic_Algorithms 4: Theory 5: Reinforcement_Learning 6: Probabilistic_Methods Question: Based on the content of the target and neighbors' scientific publications, predict the category ID (0 to 6) for the target node.
2
Case Based
cora
237
test
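The Bayesian classifier analysed in the record above is concrete enough to sketch. A minimal naive Bayes classifier over boolean attributes with Laplace smoothing; the average-case analysis itself is not reproduced, and the toy data at the end is made up for illustration.

```python
import math
from collections import defaultdict

def train_naive_bayes(examples):
    """Bayesian classifier over boolean attributes with Laplace smoothing.

    examples: list of (x, y) with x a 0/1 tuple and y a class label.
    Assumes attributes are independent given the class -- exactly the
    assumption whose average-case behaviour the paper analyses.
    """
    class_count = defaultdict(int)
    attr_count = defaultdict(int)          # (y, i, value) -> count
    for x, y in examples:
        class_count[y] += 1
        for i, v in enumerate(x):
            attr_count[(y, i, v)] += 1
    n, k = len(examples), len(class_count)

    def predict(x):
        def log_score(y):
            s = math.log((class_count[y] + 1) / (n + k))
            for i, v in enumerate(x):
                s += math.log((attr_count[(y, i, v)] + 1) / (class_count[y] + 2))
            return s
        return max(class_count, key=log_score)

    return predict

predict = train_naive_bayes([((1, 1, 0), 1), ((1, 0, 0), 1), ((0, 1, 1), 0)])
print(predict((1, 1, 1)))  # most probable class under the model
```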
1-hop neighbor's text information: "Evolution in Time and Space: The Parallel Genetic Algorithm." In Foundations of Genetic Algorithms, : The parallel genetic algorithm (PGA) uses two major modifications compared to the genetic algorithm. Firstly, selection for mating is distributed. Individuals live in a 2-D world. Selection of a mate is done by each individual independently in its neighborhood. Secondly, each individual may improve its fitness during its lifetime by e.g. local hill-climbing. The PGA is totally asynchronous, running with maximal efficiency on MIMD parallel computers. The search strategy of the PGA is based on a small number of active and intelligent individuals, whereas a GA uses a large population of passive individuals. We will investigate the PGA with deceptive problems and the traveling salesman problem. We outline why and when the PGA is succesful. Abstractly, a PGA is a parallel search with information exchange between the individuals. If we represent the optimization problem as a fitness landscape in a certain configuration space, we see, that a PGA tries to jump from two local minima to a third, still better local minima, by using the crossover operator. This jump is (probabilistically) successful, if the fitness landscape has a certain correlation. We show the correlation for the traveling salesman problem by a configuration space analysis. The PGA explores implicitly the above correlation. 1-hop neighbor's text information: Genetic Algorithms in Search, Optimization and Machine Learning. : Angeline, P., Saunders, G. and Pollack, J. (1993) An evolutionary algorithm that constructs recurrent neural networks, LAIR Technical Report #93-PA-GNARLY, Submitted to IEEE Transactions on Neural Networks Special Issue on Evolutionary Programming. Target text information: A Study of Genetic Algorithms to Find Approximate Solutions to Hard 3CNF Problems: Genetic algorithms have been used to solve hard optimization problems ranging from the Travelling Salesman problem to the Quadratic Assignment problem. We show that the Simple Genetic Algorithm can be used to solve an optimization problem derived from the 3-Conjunctive Normal Form problem. By separating the populations into small sub-populations, parallel genetic algorithms exploits the inherent parallelism in genetic algorithms and prevents premature convergence. Genetic algorithms using hill-climbing conduct genetic search in the space of local optima, and hill-climbing can be less com-putationally expensive than genetic search. We examine the effectiveness of these techniques in improving the quality of solutions of 3CNF problems. I provide the content of the target node and its neighbors' information. The relation between the target node and its 1-hop neighbors is 'citation'. The 7 categories are: 0: Rule_Learning 1: Neural_Networks 2: Case_Based 3: Genetic_Algorithms 4: Theory 5: Reinforcement_Learning 6: Probabilistic_Methods Question: Based on the content of the target and neighbors' scientific publications, predict the category ID (0 to 6) for the target node.
3
Genetic Algorithms
cora
486
test
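The MAX-3SAT optimization problem in the record above (maximize the number of satisfied 3CNF clauses) is easy to attack with a simple GA. A plain single-population sketch with truncation selection, one-point crossover, and bitwise mutation; the paper's sub-population and hill-climbing variants would be layered on top of a loop like this one, and all parameter values here are illustrative.

```python
import random

def ga_max3sat(clauses, n_vars, pop_size=60, gens=200, p_mut=0.02):
    """Simple GA for MAX-3SAT. clauses: list of 3-tuples of nonzero
    ints, DIMACS-style (e.g. -2 means "x2 negated"); fitness = number
    of satisfied clauses."""
    def fitness(bits):
        return sum(any((lit > 0) == bits[abs(lit) - 1] for lit in c) for c in clauses)

    pop = [[random.randint(0, 1) for _ in range(n_vars)] for _ in range(pop_size)]
    for _ in range(gens):
        scored = sorted(pop, key=fitness, reverse=True)
        if fitness(scored[0]) == len(clauses):
            break                                       # satisfying assignment found
        parents = scored[: pop_size // 2]               # truncation selection
        pop = parents[:]
        while len(pop) < pop_size:
            a, b = random.sample(parents, 2)
            cut = random.randrange(1, n_vars)           # one-point crossover
            child = a[:cut] + b[cut:]
            child = [1 - g if random.random() < p_mut else g for g in child]
            pop.append(child)
    best = max(pop, key=fitness)
    return best, fitness(best)

clauses = [(1, -2, 3), (-1, 2, 3), (1, 2, -3), (-1, -2, -3)]
print(ga_max3sat(clauses, 3))
```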
1-hop neighbor's text information: Cortical Mechanisms of Visual Recognition and Learning: A Hierarchical Kalman Filter Model: We describe a biologically plausible model of dynamic recognition and learning in the visual cortex based on the statistical theory of Kalman filtering from optimal control theory. The model utilizes a hierarchical network whose successive levels implement Kalman filters operating over successively larger spatial and temporal scales. Each hierarchical level in the network predicts the current visual recognition state at a lower level and adapts its own recognition state using the residual error between the prediction and the actual lower-level state. Simultaneously, the network also learns an internal model of the spatiotemporal dynamics of the input stream by adapting the synaptic weights at each hierarchical level in order to minimize prediction errors. The Kalman filter model respects key neuroanatomical data such as the reciprocity of connections between visual cortical areas, and assigns specific computational roles to the inter-laminar connections known to exist between neurons in the visual cortex. Previous work elucidated the usefulness of this model in explaining neurophysiological phenomena such as endstopping and other related extra-classical receptive field effects. In this paper, in addition to providing a more detailed exposition of the model, we present a variety of experimental results demonstrating the ability of this model to perform robust spatiotemporal segmentation and recognition of objects and image sequences in the presence of varying amounts of occlusion, background clutter, and noise. Target text information: The wake-sleep algorithm for unsupervised neural networks. : An unsupervised learning algorithm for a multilayer network of stochastic neurons is described. Bottom-up recognition connections convert the input into representations in successive hidden layers and top-down generative connections reconstruct the representation in one layer from the representation in the layer above. In the wake phase, neurons are driven by recognition connections, and generative connections are adapted to increase the probability that they would reconstruct the correct activity vector in the layer below. In the sleep phase, neurons are driven by generative connections and recognition connections are adapted to increase the probability that they would produce the correct activity vector in the layer above. Supervised learning algorithms for multilayer neural networks face two problems: They require a teacher to specify the desired output of the network and they require some method of communicating error information to all of the connections. The wake-sleep algorithm avoids both these problems. When there is no external teaching signal to be matched, some other goal is required to force the hidden units to extract underlying structure. In the wake-sleep algorithm the goal is to learn representations that are economical to describe but allow the input to be reconstructed accurately. We can quantify this goal by imagining a communication game in which each vector of raw sensory inputs is communicated to a receiver by first sending its hidden representation and then sending the difference between the input vector and its top-down reconstruction from the hidden representation. The aim of learning is to minimize the description length which is the total number of bits that would be required to communicate the input vectors in this way [1].
No communication actually takes place, but minimizing the description length that would be required forces the network to learn economical representations that capture the underlying regularities in the data [2]. I provide the content of the target node and its neighbors' information. The relation between the target node and its 1-hop neighbors is 'citation'. The 7 categories are: 0: Rule_Learning 1: Neural_Networks 2: Case_Based 3: Genetic_Algorithms 4: Theory 5: Reinforcement_Learning 6: Probabilistic_Methods Question: Based on the content of the target and neighbors' scientific publications, predict the category ID (0 to 6) for the target node.
1
Neural Networks
cora
1,446
val
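The wake-sleep description in the record above is concrete enough to sketch. A minimal one-hidden-layer version with binary stochastic units, assuming `data` is a list of 0/1 visible vectors; the multilayer case and any serious tuning are omitted, and all hyperparameters are illustrative.

```python
import math, random

def sig(z):
    return 1.0 / (1.0 + math.exp(-z))

def sample(p):
    return [1 if random.random() < q else 0 for q in p]

def wake_sleep(data, nv, nh, epochs=50, lr=0.1):
    """Minimal one-hidden-layer wake-sleep sketch.

    R: recognition weights (visible -> hidden), G: generative weights
    (hidden -> visible), b: generative bias giving the prior over h.
    Wake: drive h with R, fit the generative side; sleep: dream (h, v)
    from the generative model, fit the recognition side. Both updates
    are local delta rules, as the abstract describes.
    """
    R = [[random.gauss(0, 0.1) for _ in range(nv)] for _ in range(nh)]
    G = [[random.gauss(0, 0.1) for _ in range(nh)] for _ in range(nv)]
    b = [0.0] * nh
    for _ in range(epochs):
        for v in data:                                # --- wake phase ---
            h = sample([sig(sum(R[j][i] * v[i] for i in range(nv))) for j in range(nh)])
            pv = [sig(sum(G[i][j] * h[j] for j in range(nh))) for i in range(nv)]
            for i in range(nv):
                for j in range(nh):
                    G[i][j] += lr * (v[i] - pv[i]) * h[j]
            for j in range(nh):
                b[j] += lr * (h[j] - sig(b[j]))
        for _ in range(len(data)):                    # --- sleep phase ---
            h = sample([sig(bj) for bj in b])
            v = sample([sig(sum(G[i][j] * h[j] for j in range(nh))) for i in range(nv)])
            ph = [sig(sum(R[j][i] * v[i] for i in range(nv))) for j in range(nh)]
            for j in range(nh):
                for i in range(nv):
                    R[j][i] += lr * (h[j] - ph[j]) * v[i]
    return R, G, b
```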
1-hop neighbor's text information: Probabilistic instance-based learning. : Traditional instance-based learning methods base their predictions directly on (training) data that has been stored in the memory. The predictions are based on weighting the contributions of the individual stored instances by a distance function implementing a domain-dependent similarity metric. This basic approach suffers from three drawbacks: computationally expensive prediction when the database grows large, overfitting in the presence of noisy data, and sensitivity to the selection of a proper distance function. We address all these issues by giving a probabilistic interpretation to instance-based learning, where the goal is to approximate predictive distributions of the attributes of interest. In this probabilistic view the instances are not individual data items but probability distributions, and we perform Bayesian inference with a mixture of such prototype distributions. We demonstrate the feasibility of the method empirically for a wide variety of public domain classification data sets. 1-hop neighbor's text information: Hierarchical Mixtures of Experts and the EM Algorithm, : We present a tree-structured architecture for supervised learning. The statistical model underlying the architecture is a hierarchical mixture model in which both the mixture coefficients and the mixture components are generalized linear models (GLIM's). Learning is treated as a maximum likelihood problem; in particular, we present an Expectation-Maximization (EM) algorithm for adjusting the parameters of the architecture. We also develop an on-line learning algorithm in which the parameters are updated incrementally. Comparative simulation results are presented in the robot dynamics domain. *We want to thank Geoffrey Hinton, Tony Robinson, Mitsuo Kawato and Daniel Wolpert for helpful comments on the manuscript. This project was supported in part by a grant from the McDonnell-Pew Foundation, by a grant from ATR Human Information Processing Research Laboratories, by a grant from Siemens Corporation, by grant IRI-9013991 from the National Science Foundation, and by grant N00014-90-J-1942 from the Office of Naval Research. The project was also supported by NSF grant ASC-9217041 in support of the Center for Biological and Computational Learning at MIT, including funds provided by DARPA under the HPCC program, and NSF grant ECS-9216531 to support an Initiative in Intelligent Control at MIT. Michael I. Jordan is an NSF Presidential Young Investigator. 1-hop neighbor's text information: A practical Bayesian framework for backpropagation networks. : A quantitative and practical Bayesian framework is described for learning of mappings in feedforward networks. The framework makes possible: (1) objective comparisons between solutions using alternative network architectures; (2) objective stopping rules for network pruning or growing procedures; (3) objective choice of magnitude and type of weight decay terms or additive regularisers (for penalising large weights, etc.); (4) a measure of the effective number of well-determined parameters in a model; (5) quantified estimates of the error bars on network parameters and on network output; (6) objective comparisons with alternative learning and interpolation models such as splines and radial basis functions. The Bayesian `evidence' automatically embodies `Occam's razor,' penalising over-flexible and over-complex models.
The Bayesian approach helps detect poor underlying assumptions in learning models. For learning models well matched to a problem, a good correlation between generalisation ability and the Bayesian evidence is obtained. This paper makes use of the Bayesian framework for regularisation and model comparison described in the companion paper `Bayesian interpolation' (MacKay, 1991a). This framework is due to Gull and Skilling (Gull, 1989a). Target text information: Using Neural Networks for Descriptive Statistical Analysis of Educational Data: In this paper we discuss the methodological issues of using a class of neural networks called Mixture Density Networks (MDN) for discriminant analysis. MDN models have the advantage of having a rigorous probabilistic interpretation, and they have proven to be a viable alternative as a classification procedure in discrete domains. We will address both the classification and interpretive aspects of discriminant analysis, and compare the approach to the traditional method of linear discriminants as implemented in standard statistical packages. We show that the MDN approach adopted performs well in both aspects. Many of the observations made are not restricted to the particular case at hand, and are applicable to most applications of discriminant analysis in educational research. * URL: http://www.cs.Helsinki.FI/research/cosco/ I provide the content of the target node and its neighbors' information. The relation between the target node and its 1-hop neighbors is 'citation'. The 7 categories are: 0: Rule_Learning 1: Neural_Networks 2: Case_Based 3: Genetic_Algorithms 4: Theory 5: Reinforcement_Learning 6: Probabilistic_Methods Question: Based on the content of the target and neighbors' scientific publications, predict the category ID (0 to 6) for the target node.
1
Neural Networks
cora
109
test
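The MDN target above contrasts probabilistic discriminant analysis with linear discriminants. As a much simpler stand-in for either, a per-class Gaussian (diagonal covariance) classifier; the mixture densities and network training of the actual MDN approach are not reproduced here, so this is only a probabilistic-discriminant baseline sketch.

```python
import math
from collections import defaultdict

def fit_gaussian_discriminant(examples):
    """Per-class Gaussian model with diagonal covariance.

    examples: list of (x, y) with x a list of floats, y a class label.
    Classification picks the class maximizing log prior + log density,
    i.e. a simple probabilistic discriminant.
    """
    by_class = defaultdict(list)
    for x, y in examples:
        by_class[y].append(x)
    n = len(examples)
    params = {}
    for y, xs in by_class.items():
        mu = [sum(col) / len(xs) for col in zip(*xs)]
        var = [max(sum((v - m) ** 2 for v in col) / len(xs), 1e-6)
               for col, m in zip(zip(*xs), mu)]
        params[y] = (math.log(len(xs) / n), mu, var)

    def predict(x):
        def log_post(y):
            prior, mu, var = params[y]
            return prior - 0.5 * sum(math.log(2 * math.pi * s) + (xi - m) ** 2 / s
                                     for xi, m, s in zip(x, mu, var))
        return max(params, key=log_post)

    return predict
```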
1-hop neighbor's text information: `Machine learning in prognosis of the femoral neck fracture recovery', : We compare the performance of several machine learning algorithms in the problem of prognostics of the femoral neck fracture recovery: the K-nearest neighbours algorithm, the semi-naive Bayesian classifier, backpropagation with weight elimination learning of the multilayered neural networks, the LFC (lookahead feature construction) algorithm, and the Assistant-I and Assistant-R algorithms for top-down induction of decision trees using information gain and RELIEFF as search heuristics, respectively. We compare the prognostic accuracy and the explanation ability of different classifiers. Among the different algorithms the semi-naive Bayesian classifier and Assistant-R seem to be the most appropriate. We analyze the combination of decisions of several classifiers for solving prediction problems and show that the combined classifier improves both performance and the explanation ability. 1-hop neighbor's text information: `Non-myopic attribute estimation in regression', : One of the key issues in both discrete and continuous class prediction and in machine learning in general seems to be the problem of estimating the quality of attributes. Heuristic measures mostly assume independence of attributes and therefore cannot be successfully used in domains with strong dependencies between attributes. Relief and its extension ReliefF are statistical methods capable of correctly estimating the quality of attributes in classification problems with strong dependencies between attributes. Following the analysis of ReliefF we have extended it to continuous class problems. Regressional ReliefF (RReliefF) and ReliefF provide a unified view on estimation of quality of attributes. The experiments show that RReliefF successfully estimates the quality of attributes and can be used for non-myopic learning of regression trees. 1-hop neighbor's text information: Estimating attributes: Analysis and extension of relief. : In the context of machine learning from examples this paper deals with the problem of estimating the quality of attributes with and without dependencies among them. Kira and Rendell (1992a,b) developed an algorithm called RELIEF, which was shown to be very efficient in estimating attributes. Original RELIEF can deal with discrete and continuous attributes and is limited to only two-class problems. In this paper RELIEF is analysed and extended to deal with noisy, incomplete, and multi-class data sets. The extensions are verified on various artificial problems and one well-known real-world problem. Target text information: Prognosing the Survival Time of the Patients with the Anaplastic Thyroid Carcinoma with Machine Learning: Anaplastic thyroid carcinoma is a rare but very aggressive tumor. Many factors that might influence the survival of patients have been suggested. The aim of our study was to determine which of the factors, known at the time of admission to the hospital, might predict survival of patients with anaplastic thyroid carcinoma. Our aim was also to assess the relative importance of the factors and to identify potentially useful decision and regression trees generated by machine learning algorithms. Our study included 126 patients (90 females and 36 males; mean age was 66.7 years) with anaplastic thyroid carcinoma treated at the Institute of Oncology Ljubljana from 1972 to 1992.
Patients were classified into categories according to 11 attributes: sex, age, history, physical findings, extent of disease on admission, and tumor morphology. In this paper we compare the machine learning approach with the previous statistical evaluations on the problem (univariate and multivariate analysis) and show that it can provide a more thorough analysis and improve understanding of the data. I provide the content of the target node and its neighbors' information. The relation between the target node and its 1-hop neighbors is 'citation'. The 7 categories are: 0: Rule_Learning 1: Neural_Networks 2: Case_Based 3: Genetic_Algorithms 4: Theory 5: Reinforcement_Learning 6: Probabilistic_Methods Question: Based on the content of the target and neighbors' scientific publications, predict the category ID (0 to 6) for the target node.
0
Rule Learning
cora
528
test
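Relief-style attribute estimation is central to the record above and is easy to sketch in its original two-class form (the ReliefF and RReliefF extensions add handling for noise, multiple classes, and regression). This sketch assumes numeric attributes scaled to [0, 1] and uses Manhattan distance; the sampling size m is illustrative.

```python
import random

def relief(examples, m=100):
    """Original two-class Relief attribute estimator (Kira & Rendell).

    examples: list of (x, y), x a vector of numbers in [0, 1], y in
    {0, 1}. An attribute's weight rises when it separates an instance
    from its nearest miss and falls when it differs from its nearest
    hit, so attributes involved in dependencies can still score well.
    """
    d = len(examples[0][0])
    w = [0.0] * d

    def dist(a, b):
        return sum(abs(u - v) for u, v in zip(a, b))

    for _ in range(m):
        x, y = random.choice(examples)
        hits = [e for e in examples if e[1] == y and e[0] is not x]
        misses = [e for e in examples if e[1] != y]
        if not hits or not misses:
            continue
        hit = min(hits, key=lambda e: dist(e[0], x))[0]
        miss = min(misses, key=lambda e: dist(e[0], x))[0]
        for i in range(d):
            w[i] += (abs(x[i] - miss[i]) - abs(x[i] - hit[i])) / m
    return w
```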
1-hop neighbor's text information: Structured Representation of Complex Stochastic Systems: This paper considers the problem of representing complex systems that evolve stochastically over time. Dynamic Bayesian networks provide a compact representation for stochastic processes. Unfortunately, they are often unwieldy since they cannot explicitly model the complex organizational structure of many real life systems: the fact that processes are typically composed of several interacting subprocesses, each of which can, in turn, be further decomposed. We propose a hierarchically structured representation language which extends both dynamic Bayesian networks and the object-oriented Bayesian network framework of [9], and show that our language allows us to describe such systems in a natural and modular way. Our language supports a natural representation for certain system characteristics that are hard to capture using more traditional frameworks. For example, it allows us to represent systems where some processes evolve at a different rate than others, or systems where the processes interact only intermittently. We provide a simple inference mechanism for our representation via translation to Bayesian networks, and suggest ways in which the inference algorithm can exploit the additional structure encoded in our representation. 1-hop neighbor's text information: Structured Arc Reversal and Simulation of Dynamic Probabilistic Networks: We present an algorithm for arc reversal in Bayesian networks with tree-structured conditional probability tables, and consider some of its advantages, especially for the simulation of dynamic probabilistic networks. In particular, the method allows one to produce CPTs for nodes involved in the reversal that exploit regularities in the conditional distributions. We argue that this approach alleviates some of the overhead associated with arc reversal, plays an important role in evidence integration and can be used to restrict sampling of variables in DPNs. We also provide an algorithm that detects the dynamic irrelevance of state variables in forward simulation. This algorithm exploits the structured CPTs in a reversed network to determine, in a time-independent fashion, the conditions under which a variable does or does not need to be sampled. 1-hop neighbor's text information: Poole (1997). A constraint-based approach to preference elicitation and decision making. : We investigate the solution of constraint-based configuration problems in which the preference function over outcomes is unknown or incompletely specified. The aim is to configure a system, such as a personal computer, so that it will be optimal for a given user. The goal of this project is to develop algorithms that generate the most preferred feasible configuration by posing preference queries to the user. In order to minimize the number and the complexity of preference queries posed to the user, the algorithm reasons about the user's preferences while taking into account constraints over the set of feasible configurations. We assume that the user can structure their preferences in a particular way that, while natural in many settings, can be exploited during the optimization process. We also address in a preliminary fashion the trade-offs between computational effort in the solution of a problem and the degree of interaction with the user. Target text information: Context-specific independence in Bayesian networks. 
: Bayesian networks provide a language for qualitatively representing the conditional independence properties of a distribution. This allows a natural and compact representation of the distribution, eases knowledge acquisition, and supports effective inference algorithms. It is well-known, however, that there are certain independencies that we cannot capture qualitatively within the Bayesian network structure: independencies that hold only in certain contexts, i.e., given a specific assignment of values to certain variables. In this paper, we propose a formal notion of context-specific independence (CSI), based on regularities in the conditional probability tables (CPTs) at a node. We present a technique, analogous to (and based on) d-separation, for determining when such independence holds in a given network. We then focus on a particular qualitative representation scheme, tree-structured CPTs, for capturing CSI. We suggest ways in which this representation can be used to support effective inference algorithms. In particular, we present a structural decomposition of the resulting network which can improve the performance of clustering algorithms, and an alternative algorithm based on cutset conditioning. I provide the content of the target node and its neighbors' information. The relation between the target node and its 1-hop neighbors is 'citation'. The 7 categories are: 0: Rule_Learning 1: Neural_Networks 2: Case_Based 3: Genetic_Algorithms 4: Theory 5: Reinforcement_Learning 6: Probabilistic_Methods Question: Based on the content of the target and neighbors' scientific publications, predict the category ID (0 to 6) for the target node.
6
Probabilistic Methods
cora
1,697
test
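Tree-structured CPTs, the representation the target paper builds on, can be illustrated directly: a CPT stored as a decision tree over parent variables, where variables absent from the traversed path are contextually irrelevant. The alarm numbers below are made up for illustration.

```python
def cpt_tree_prob(tree, assignment):
    """Look up P(X = 1 | parents) in a tree-structured CPT.

    tree is either a float (a leaf probability) or a tuple
    (var, subtree_if_0, subtree_if_1). Variables that never appear on
    the path taken are exactly the context-specific independencies the
    paper formalizes: changing them cannot change the answer.
    """
    while isinstance(tree, tuple):
        var, if0, if1 = tree
        tree = if1 if assignment[var] else if0
    return tree

# P(Alarm | Burglary, Earthquake, Vacation): once Burglary is true,
# Vacation is irrelevant -- a context-specific independence.
cpt = ("Burglary",
       ("Earthquake", 0.01, 0.30),
       ("Vacation", 0.95, 0.60))
print(cpt_tree_prob(cpt, {"Burglary": 1, "Earthquake": 0, "Vacation": 1}))  # 0.6
```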
1-hop neighbor's text information: Theory refinement combining analytical and empirical methods. : This article describes a comprehensive approach to automatic theory revision. Given an imperfect theory, the approach combines explanation attempts for incorrectly classified examples in order to identify the failing portions of the theory. For each theory fault, correlated subsets of the examples are used to inductively generate a correction. Because the corrections are focused, they tend to preserve the structure of the original theory. Because the system starts with an approximate domain theory, in general fewer training examples are required to attain a given level of performance (classification accuracy) compared to a purely empirical system. The approach applies to classification systems employing a propositional Horn-clause theory. The system has been tested in a variety of application domains, and results are presented for problems in the domains of molecular biology and plant disease diagnosis. 1-hop neighbor's text information: Creating advice-taking reinforcement learners. : Learning from reinforcements is a promising approach for creating intelligent agents. However, reinforcement learning usually requires a large number of training episodes. We present and evaluate a design that addresses this shortcoming by allowing a connectionist Q-learner to accept advice given, at any time and in a natural manner, by an external observer. In our approach, the advice-giver watches the learner and occasionally makes suggestions, expressed as instructions in a simple imperative programming language. Based on techniques from knowledge-based neural networks, we insert these programs directly into the agent's utility function. Subsequent reinforcement learning further integrates and refines the advice. We present empirical evidence that investigates several aspects of our approach and show that, given good advice, a learner can achieve statistically significant gains in expected reward. A second experiment shows that advice improves the expected reward regardless of the stage of training at which it is given, while another study demonstrates that subsequent advice can result in further gains in reward. Finally, we present experimental results that indicate our method is more powerful than a naive technique for making use of advice. 1-hop neighbor's text information: Learning to predict by the methods of temporal differences. : This article introduces a class of incremental learning procedures specialized for prediction, that is, for using past experience with an incompletely known system to predict its future behavior. Whereas conventional prediction-learning methods assign credit by means of the difference between predicted and actual outcomes, the new methods assign credit by means of the difference between temporally successive predictions. Although such temporal-difference methods have been used in Samuel's checker player, Holland's bucket brigade, and the author's Adaptive Heuristic Critic, they have remained poorly understood. Here we prove their convergence and optimality for special cases and relate them to supervised-learning methods. For most real-world prediction problems, temporal-difference methods require less memory and less peak computation than conventional methods; and they produce more accurate predictions.
We argue that most problems to which supervised learning is currently applied are really prediction problems of the sort to which temporal-difference methods can be applied to advantage. Target text information: Intelligent agents for Web-based tasks: An advice-taking approach. : We present and evaluate an implemented system with which to rapidly and easily build intelligent software agents for Web-based tasks. Our design is centered around two basic functions: ScoreThisLink and ScoreThisPage. If given highly accurate such functions, standard heuristic search would lead to efficient retrieval of useful information. Our approach allows users to tailor our system's behavior by providing approximate advice about the above functions. This advice is mapped into neural network implementations of the two functions. Subsequent reinforcements from the Web (e.g., dead links) and any ratings of retrieved pages that the user wishes to provide are, respectively, used to refine the link- and page-scoring functions. Hence, our architecture provides an appealing middle ground between nonadaptive agent programming languages and systems that solely learn user preferences from the user's ratings of pages. We describe our internal representation of Web pages, the major predicates in our advice language, how advice is mapped into neural networks, and the mechanisms for refining advice based on subsequent feedback. We also present a case study where we provide some simple advice and specialize our general-purpose system into a "home-page finder". An empirical study demonstrates that our approach leads to a more effective home-page finder than that of a leading commercial Web search site. I provide the content of the target node and its neighbors' information. The relation between the target node and its 1-hop neighbors is 'citation'. The 7 categories are: 0: Rule_Learning 1: Neural_Networks 2: Case_Based 3: Genetic_Algorithms 4: Theory 5: Reinforcement_Learning 6: Probabilistic_Methods Question: Based on the content of the target and neighbors' scientific publications, predict the category ID (0 to 6) for the target node.
1
Neural Networks
cora
1,694
val
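The advice-taking work in the record above refines a connectionist Q-function. As background only, a tabular Q-learning sketch in which an optional `advice` dictionary crudely stands in for advice-taking by pre-loading Q-values; the env interface named here is an assumption of this sketch, not an API from the papers.

```python
import random
from collections import defaultdict

def q_learning(env, episodes, alpha=0.1, gamma=0.95, eps=0.1, advice=None):
    """Tabular Q-learning; `advice` optionally pre-loads Q-values.

    env is assumed to provide reset() -> state, actions(state) -> list,
    and step(state, action) -> (next_state, reward, done). The cited
    papers inject advice into a neural Q-function and keep refining it;
    the analogous (much cruder) move here is seeding the Q-table.
    """
    Q = defaultdict(float)
    if advice:                       # e.g. {(state, action): optimistic value}
        Q.update(advice)
    for _ in range(episodes):
        s, done = env.reset(), False
        while not done:
            acts = env.actions(s)
            a = (random.choice(acts) if random.random() < eps
                 else max(acts, key=lambda a: Q[(s, a)]))     # epsilon-greedy
            s2, r, done = env.step(s, a)
            target = r if done else r + gamma * max(Q[(s2, a2)] for a2 in env.actions(s2))
            Q[(s, a)] += alpha * (target - Q[(s, a)])
            s = s2
    return Q
```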
1-hop neighbor's text information: "Extracting tree-structured representations of trained networks," : A significant limitation of neural networks is that the representations they learn are usually incomprehensible to humans. We present a novel algorithm, Trepan, for extracting comprehensible, symbolic representations from trained neural networks. Our algorithm uses queries to induce a decision tree that approximates the concept represented by a given network. Our experiments demonstrate that Trepan is able to produce decision trees that maintain a high level of fidelity to their respective networks while being comprehensible and accurate. Unlike previous work in this area, our algorithm is general in its applicability and scales well to large net works and problems with high-dimensional input spaces. 1-hop neighbor's text information: Extracting Comprehensible Models from Trained Neural Networks. : Although they are applicable to a wide array of problems, and have demonstrated good performance on a number of difficult, real-world tasks, neural networks are not usually applied to problems in which comprehensibility of the acquired concepts is important. The concept representations formed by neural networks are hard to understand because they typically involve distributed, nonlinear relationships encoded by a large number of real-valued parameters. To address this limitation, we have been developing algorithms for extracting "symbolic" concept representations from trained neural networks. We first discuss why it is important to be able to understand the concept representations formed by neural networks. We then briefly describe our approach and discuss a number of issues pertaining to comprehensibility that have arisen in our work. Finally, we discuss choices that we have made in our research to date, and open research issues that we have not yet addressed. 1-hop neighbor's text information: Experiments with a New Boosting Algorithm. : In an earlier paper, we introduced a new boosting algorithm called AdaBoost which, theoretically, can be used to significantly reduce the error of any learning algorithm that consistently generates classifiers whose performance is a little better than random guessing. We also introduced the related notion of a pseudo-loss which is a method for forcing a learning algorithm of multi-label concepts to concentrate on the labels that are hardest to discriminate. In this paper, we describe experiments we carried out to assess how well AdaBoost with and without pseudo-loss, performs on real learning problems. We performed two sets of experiments. The first set compared boosting to Breiman's bagging method when used to aggregate various classifiers (including decision trees and single attribute-value tests). We compared the performance of the two methods on a collection of machine-learning benchmarks. In the second set of experiments, we studied in more detail the performance of boosting using a nearest-neighbor classifier on an OCR problem. Target text information: Submitted to the Future Generation Computer Systems special issue on Data Mining. Using Neural Networks: Neural networks have been successfully applied in a wide range of supervised and unsupervised learning applications. Neural-network methods are not commonly used for data-mining tasks, however, because they often produce incomprehensible models and require long training times. 
In this article, we describe neural-network learning algorithms that are able to produce comprehensible models, and that do not require excessive training times. Specifically, we discuss two classes of approaches for data mining with neural networks. The first type of approach, often called rule extraction, involves extracting symbolic models from trained neural networks. The second approach is to directly learn simple, easy-to-understand networks. We argue that, given the current state of the art, neural-network methods deserve a place in the tool boxes of data-mining specialists. I provide the content of the target node and its neighbors' information. The relation between the target node and its 1-hop neighbors is 'citation'. The 7 categories are: 0: Rule_Learning 1: Neural_Networks 2: Case_Based 3: Genetic_Algorithms 4: Theory 5: Reinforcement_Learning 6: Probabilistic_Methods Question: Based on the content of the target and neighbors' scientific publications, predict the category ID (0 to 6) for the target node.
1
Neural Networks
cora
163
test
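The boosting experiments cited in the record above aggregate weak classifiers such as single attribute-value tests. A compact AdaBoost-with-stumps sketch in that spirit, assuming binary labels in {-1, +1}; the round count and the exhaustive stump search are illustrative, not the cited papers' exact setup.

```python
import math

def adaboost_stumps(X, y, rounds=20):
    """AdaBoost over threshold stumps on real-valued features.

    X: list of feature vectors, y: labels in {-1, +1}. Each round fits
    the stump with smallest weighted error and reweights the examples.
    """
    n, d = len(X), len(X[0])
    w = [1.0 / n] * n
    ensemble = []                                  # (alpha, feat, thresh, sign)
    for _ in range(rounds):
        best = None                                # (err, feat, thresh, sign)
        for f in range(d):
            for t in sorted({x[f] for x in X}):
                for sign in (1, -1):
                    preds = [sign if x[f] >= t else -sign for x in X]
                    err = sum(wi for wi, p, yi in zip(w, preds, y) if p != yi)
                    if best is None or err < best[0]:
                        best = (err, f, t, sign)
        err, f, t, sign = best
        err = min(max(err, 1e-10), 1 - 1e-10)      # avoid log(0)
        alpha = 0.5 * math.log((1 - err) / err)
        ensemble.append((alpha, f, t, sign))
        preds = [sign if x[f] >= t else -sign for x in X]
        w = [wi * math.exp(-alpha * p * yi) for wi, p, yi in zip(w, preds, y)]
        z = sum(w)
        w = [wi / z for wi in w]                   # renormalize the distribution

    def predict(x):
        s = sum(a * (sg if x[f] >= t else -sg) for a, f, t, sg in ensemble)
        return 1 if s >= 0 else -1

    return predict
```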
1-hop neighbor's text information: Colombetti (1994b). The role of the trainer in reinforcement learning. : In this paper we propose a three-stage incremental approach to the development of autonomous agents. We discuss some issues about the characteristics which differentiate reinforcement programs (RPs), and define the trainer as a particular kind of RP. We present a set of results obtained running experiments with a trainer which provides guidance to the AutonoMouse, our mouse-sized autonomous robot. 1-hop neighbor's text information: "Genetic and Non-Genetic Operators in Alecsys," : It is well known that standard learning classifier systems, when applied to many different domains, exhibit a number of problems: payoff oscillation, difficult-to-regulate interplay between the reward system and the background genetic algorithm (GA), rule chains instability, and default hierarchies instability, to name only a few. ALECSYS is a parallel version of a standard learning classifier system (CS), and as such suffers from these same problems. In this paper we propose some innovative solutions to some of these problems. We introduce the following original features. Mutespec, a new genetic operator used to specialize potentially useful classifiers. Energy, a quantity introduced to measure global convergence in order to apply the genetic algorithm only when the system is close to a steady state. Dynamical adjustment of the classifiers set cardinality, in order to speed up the performance phase of the algorithm. We present simulation results of experiments run in a simulated two-dimensional world in which a simple agent learns to follow a light source. 1-hop neighbor's text information: Robot shaping: Developing autonomous agents through learning. : Learning plays a vital role in the development of situated agents. In this paper, we explore the use of reinforcement learning to "shape" a robot to perform a predefined target behavior. We connect both simulated and real robots to ALECSYS, a parallel implementation of a learning classifier system with an extended genetic algorithm. After classifying different kinds of Animat-like behaviors, we explore the effects on learning of different types of agent's architecture (monolithic, flat and hierarchical) and of training strategies. In particular, hierarchical architecture requires the agent to learn how to coordinate basic learned responses. We show that the best results are achieved when both the agent's architecture and the training strategy match the structure of the behavior pattern to be learned. We report the results of a number of experiments carried out both in simulated and in real environments, and show that the results of simulations carry smoothly to real robots. While most of our experiments deal with simple reactive behavior, in one of them we demonstrate the use of a simple and general memory mechanism. As a whole, our experimental activity demonstrates that classifier systems with genetic algorithms can be practically employed to develop autonomous agents. Target text information: Alecsys and the autonomouse: Learning to control a real robot by distributed classifier systems. : I provide the content of the target node and its neighbors' information. The relation between the target node and its 1-hop neighbors is 'citation'.
The 7 categories are: 0: Rule_Learning 1: Neural_Networks 2: Case_Based 3: Genetic_Algorithms 4: Theory 5: Reinforcement_Learning 6: Probabilistic_Methods Question: Based on the content of the target and neighbors' scientific publications, predict the category ID (0 to 6) for the target node.
5
Reinforcement Learning
cora
2,023
test
1-hop neighbor's text information: Determining mental state from EEG signals using neural networks. : EEG analysis has played a key role in the modeling of the brain's cortical dynamics, but relatively little effort has been devoted to developing EEG as a limited means of communication. If several mental states can be reliably distinguished by recognizing patterns in EEG, then a paralyzed person could communicate to a device like a wheelchair by composing sequences of these mental states. EEG pattern recognition is a difficult problem and hinges on the success of finding representations of the EEG signals in which the patterns can be distinguished. In this article, we report on a study comparing three EEG representations: the unprocessed signals, a reduced-dimensional representation using the Karhunen-Loeve transform, and a frequency-based representation. Classification is performed with a two-layer neural network implemented on a CNAPS server (128 processor, SIMD architecture) by Adaptive Solutions, Inc. Execution time comparisons show over a hundred-fold speed-up over a Sun Sparc 10. The best classification accuracy on untrained samples is 73% using the frequency-based representation. 1-hop neighbor's text information: Self-Organization and Associative Memory, : Selective suppression of transmission at feedback synapses during learning is proposed as a mechanism for combining associative feedback with self-organization of feedforward synapses. Experimental data demonstrates cholinergic suppression of synaptic transmission in layer I (feedback synapses), and a lack of suppression in layer IV (feed-forward synapses). A network with this feature uses local rules to learn mappings which are not linearly separable. During learning, sensory stimuli and desired response are simultaneously presented as input. Feedforward connections form self-organized representations of input, while suppressed feedback connections learn the transpose of feedforward connectivity. During recall, suppression is removed, sensory input activates the self-organized representation, and activity generates the learned response. Target text information: EEG Signal Classification with Different Signal Representations for a large number of hidden units.: If several mental states can be reliably distinguished by recognizing patterns in EEG, then a paralyzed person could communicate to a device like a wheelchair by composing sequences of these mental states. In this article, we report on a study comparing four representations of EEG signals and their classification by a two-layer neural network with sigmoid activation functions. The neural network is implemented on a CNAPS server (128 processor, SIMD architecture) by Adaptive Solutions, Inc., gaining a 100-fold decrease in training time over a Sun Sparc 10. I provide the content of the target node and its neighbors' information. The relation between the target node and its 1-hop neighbors is 'citation'. The 7 categories are: 0: Rule_Learning 1: Neural_Networks 2: Case_Based 3: Genetic_Algorithms 4: Theory 5: Reinforcement_Learning 6: Probabilistic_Methods Question: Based on the content of the target and neighbors' scientific publications, predict the category ID (0 to 6) for the target node.
1
Neural Networks
cora
309
test
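A hedged sketch of the frequency-based pipeline the two EEG records above describe: log power spectra as features, classified by a small two-layer sigmoid network. The synthetic signals and the scikit-learn network standing in for the CNAPS implementation are assumptions.

# Hedged sketch: frequency-based EEG representation + two-layer sigmoid net.
# The "mental state" signals are synthetic; all parameters are illustrative.
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(1)

def fake_eeg(freq, n=200, length=128):
    t = np.arange(length)
    return np.stack([np.sin(2 * np.pi * freq * t / length)
                     + 0.5 * rng.normal(size=length) for _ in range(n)])

X_raw = np.vstack([fake_eeg(5), fake_eeg(11)])    # two "mental states"
y = np.array([0] * 200 + [1] * 200)

# Frequency-based representation: log power spectrum of each trial.
X_freq = np.log(np.abs(np.fft.rfft(X_raw, axis=1)) ** 2 + 1e-9)

clf = MLPClassifier(hidden_layer_sizes=(20,), activation="logistic",
                    max_iter=1000, random_state=0).fit(X_freq, y)
print("training accuracy:", clf.score(X_freq, y))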
1-hop neighbor's text information: Case-based reasoning: Foundational issues, methodological variations, and system approaches. : resources, Alan Schultz for installing a WWW server and providing knowledge on CGI scripts, and John Grefenstette for his comments on an earlier version of this paper. Target text information: SaxEx, a case-based reasoning system for generating expressive musical performances: We have studied the problem of generating expressive musical performances in the context of tenor saxophone interpretations. We have done several recordings of a tenor sax playing different Jazz ballads with different degrees of expressiveness, including an inexpressive interpretation of each ballad. These recordings are analyzed, using SMS spectral modeling techniques, to extract information related to several expressive parameters. This set of parameters and the scores constitute the set of cases (examples) of a case-based system. From this set of cases, the system infers a set of possible expressive transformations for a given new phrase, applying similarity criteria, based on background musical knowledge, between this new phrase and the set of cases. Finally, SaxEx applies the inferred expressive transformations to the new phrase using the synthesis capabilities of SMS. I provide the content of the target node and its neighbors' information. The relation between the target node and its 1-hop neighbors is 'citation'. The 7 categories are: 0: Rule_Learning 1: Neural_Networks 2: Case_Based 3: Genetic_Algorithms 4: Theory 5: Reinforcement_Learning 6: Probabilistic_Methods Question: Based on the content of the target and neighbors' scientific publications, predict the category ID (0 to 6) for the target node.
2
Case Based
cora
252
test
1-hop neighbor's text information: Meter as Mechanism: A Neural Network that Learns Metrical Patterns: One kind of prosodic structure that apparently underlies both music and some examples of speech production is meter. Yet detailed measurements of the timing of both music and speech show that the nested periodicities that define metrical structure can be quite noisy in time. What kind of system could produce or perceive such variable metrical timing patterns? And what would it take to be able to store and reproduce particular metrical patterns from long-term memory? We have developed a network of coupled oscillators that both produces and perceives patterns of pulses that conform to particular meters. In addition, beginning with an initial state with no biases, it can learn to prefer the particular meter that it has been previously exposed to. Meter is an abstract structure in time based on the periodic recurrence of pulses, that is, on equal time intervals between distinct phase zeros. From this point of view, the simplest meter is a regular metronome pulse. But often there appear meters with two or three (or rarely even more) nested periodicities with integral frequency ratios. A hierarchy of such metrical structures is implied in standard Western musical notation, where different levels of the metrical hierarchy are indicated by kinds of notes (quarter notes, half notes, etc.) and by the bars separating measures with an equal number of beats. For example, in a basic waltz-time meter, there are individual beats, all with the same spacing, grouped into sets of three, with every third one receiving a stronger accent at its onset. In this meter there is a hierarchy consisting of both a faster periodic cycle (at the beat level) and a slower one (at the measure level) that is 1/3 as fast, with its onset (or zero phase angle) coinciding with the zero phase angle of every third beat. This essentially temporal view of meter contrasts with the traditional symbol-string theories (such as Hayes, 1981 for speech and Lerdahl and Jackendoff, 1983 for music). Metrical systems, however they are defined, seem to underlie most of what we call music. Indeed, an expanded version of European musical notation is found to be practical for transcribing most music from around the world. That is, most forms of music employ nested periodic temporal patterns (Titon, Fujie, & Locke, 1996). Musical notation has 1-hop neighbor's text information: Resonance and the perception of musical meter. : Many connectionist approaches to musical expectancy and music composition let the question of "What next?" overshadow the equally important question of "When next?". One cannot escape the latter question, one of temporal structure, when considering the perception of musical meter. We view the perception of metrical structure as a dynamic process where the temporal organization of external musical events synchronizes, or entrains, a listener's internal processing mechanisms. This article introduces a novel connectionist unit, based upon a mathematical model of entrainment, capable of phase- and frequency-locking to periodic components of incoming rhythmic patterns. Networks of these units can self-organize temporally structured responses to rhythmic patterns. The resulting network behavior embodies the perception of metrical structure. The article concludes with a discussion of the implications of our approach for theories of metrical structure and musical expectancy.
1-hop neighbor's text information: On the perception of time as phase: Toward an adaptive-oscillator model of rhythm. : Target text information: Representing rhythmic patterns in a network of oscillators. : This paper describes an evolving computational model of the perception and production of simple rhythmic patterns. The model consists of a network of oscillators of different resting frequencies which couple with input patterns and with each other. Oscillators whose frequencies match periodicities in the input tend to become activated. Metrical structure is represented explicitly in the network in the form of clusters of oscillators whose frequencies and phase angles are constrained to maintain the harmonic relationships that characterize meter. Rests in rhythmic patterns are represented by explicit rest oscillators in the network, which become activated when an expected beat in the pattern fails to appear. The model makes predictions about the relative difficulty of patterns and the effect of deviations from periodicity in the input. The nested periodicity that defines musical, and probably also linguistic, meter appears to be fundamental to the way in which people perceive and produce patterns in time. Meter by itself, however, is not sufficient to describe patterns which are interesting or memorable because of how they deviate from the metrical hierarchy. The simplest deviations are rests or gaps where one or more levels in the hierarchy would normally have a beat. When beats are removed at regular intervals which match the period of some level of the metrical hierarchy, we have what we will call a simple rhythmic pattern. Figure 1 shows an example of a simple rhythmic pattern. Below it is a grid representation of the meter which is behind the pattern. I provide the content of the target node and its neighbors' information. The relation between the target node and its 1-hop neighbors is 'citation'. The 7 categories are: 0: Rule_Learning 1: Neural_Networks 2: Case_Based 3: Genetic_Algorithms 4: Theory 5: Reinforcement_Learning 6: Probabilistic_Methods Question: Based on the content of the target and neighbors' scientific publications, predict the category ID (0 to 6) for the target node.
1
Neural Networks
cora
2,022
test
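To make the entrainment idea in the oscillator records above concrete, here is a toy single-oscillator sketch: at each input pulse the oscillator nudges its phase toward zero and adapts its period toward the pulse spacing. The update rule and every constant are illustrative assumptions, not the cited models.

# Hedged sketch: one adaptive oscillator phase-locking to periodic pulses.
import numpy as np

period_in = 100                       # input pulse period (time steps)
pulses = set(range(0, 2000, period_in))

period = 130.0                        # oscillator starts mistuned
phase = 0.0
eta_phase, eta_period = 0.2, 0.1      # coupling strengths (arbitrary)

for t in range(2000):
    phase += 2 * np.pi / period       # free-running phase advance
    if t in pulses:
        # Pulse arrives: wrapped phase error drives phase and period.
        err = np.angle(np.exp(1j * phase))
        phase -= eta_phase * err
        period += eta_period * period * err / (2 * np.pi)
    phase = np.mod(phase, 2 * np.pi)

print("adapted period:", round(period, 1))   # should drift toward 100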
1-hop neighbor's text information: Most connectionist modeling assumes noise-free inputs. This assumption is often violated. This paper introduces the idea of clearning, of simultaneously cleaning the data and learning the underlying structure. The cleaning step can be viewed as top-down processing (where the model modifies the data), and the learning step can be viewed as bottom-up processing (where the data modifies the model). Clearning is used in conjunction with standard pruning. This paper discusses the statistical foundation of clearning, gives an interpretation in terms of a mechanical model, describes how to obtain both point predictions and conditional densities for the output, and shows how the resulting model can be used to discover properties of the data otherwise not accessible (such as the signal-to-noise ratio of the inputs). This paper uses clearning to predict foreign exchange rates, a noisy time series problem with well-known benchmark performances. On the out-of-sample 1993-1994 test period, clearning obtains an annualized return on investment above 30%, significantly better than an otherwise identical network. The final ultra-sparse network with 36 remaining non-zero input-to-hidden weights (of the 1035 initial weights between 69 inputs and 15 hidden units) is very robust against overfitting. This small network also lends itself to interpretation. 1-hop neighbor's text information: (1997) A nonparametric Bayesian approach to modelling nonlinear time series. : The Bayesian multivariate adaptive regression spline (BMARS) methodology of Denison et al. (1997) is extended to cope with nonlinear time series and financial datasets. The nonlinear time series model is closely related to the adaptive spline threshold autoregressive (ASTAR) method of Lewis and Stevens (1991), while the financial models can be thought of as Bayesian versions of both the generalised and simple autoregressive conditional heteroscedastic (GARCH and ARCH) models. 1-hop neighbor's text information: Comparison of neural net and conventional techniques for lighting control. : We compare two techniques for lighting control in an actual room equipped with seven banks of lights and photoresistors to detect the lighting level at four sensing points. Each bank of lights can be independently set to one of sixteen intensity levels. The task is to determine the device intensity levels that achieve a particular configuration of sensor readings. One technique we explored uses a neural network to approximate the mapping between sensor readings and device intensity levels. The other technique we examined uses a conventional feedback control loop. The neural network approach appears superior both in that it does not require experimentation on the fly (and hence fluctuating light intensity levels during settling, and lengthy settling times) and in that it can deal with complex interactions that conventional control techniques do not handle well. This comparison was performed as part of the "Adaptive House" project, which is described briefly. Further directions for control in the Target text information: Predicting sunspots and exchange rates with connectionist networks. : We investigate the effectiveness of connectionist networks for predicting the future continuation of temporal sequences. The problem of overfitting, particularly serious for short records of noisy data, is addressed by the method of weight-elimination: a term penalizing network complexity is added to the usual cost function in back-propagation.
The ultimate goal is prediction accuracy. We analyze two time series. On the benchmark sunspot series, the networks outperform traditional statistical approaches. We show that the network performance does not deteriorate when there are more input units than needed. Weight-elimination also manages to extract some part of the dynamics of the notoriously noisy currency exchange rates and makes the network solution interpretable. I provide the content of the target node and its neighbors' information. The relation between the target node and its 1-hop neighbors is 'citation'. The 7 categories are: 0: Rule_Learning 1: Neural_Networks 2: Case_Based 3: Genetic_Algorithms 4: Theory 5: Reinforcement_Learning 6: Probabilistic_Methods Question: Based on the content of the target and neighbors' scientific publications, predict the category ID (0 to 6) for the target node.
1
Neural Networks
cora
1,289
test
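The "term penalizing network complexity" in the weight-elimination record above is, in the commonly cited formulation, a saturating weight penalty added to the squared error. A sketch in LaTeX, with the strength \lambda and scale w_0 as free parameters (the original papers should be consulted for the exact form used):

\[
C(\mathbf{w}) \;=\; \sum_k \bigl(y_k - \hat{y}_k\bigr)^2
\;+\; \lambda \sum_i \frac{w_i^2 / w_0^2}{1 + w_i^2 / w_0^2},
\qquad
\frac{\partial}{\partial w_i}\,\frac{w_i^2/w_0^2}{1 + w_i^2/w_0^2}
\;=\; \frac{2\, w_i / w_0^2}{\bigl(1 + w_i^2/w_0^2\bigr)^2}.
\]

Small weights cost roughly w_i^2/w_0^2, like ordinary weight decay, while large weights saturate at cost \lambda; the penalty therefore approximates a count of nonzero weights, which is what drives unneeded weights to be eliminated.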
1-hop neighbor's text information: M.C., "Neural Net Architectures for Temporal Sequence Processing," Predicting the future and understanding the past (Eds.). : I present a general taxonomy of neural net architectures for processing time-varying patterns. This taxonomy subsumes many existing architectures in the literature, and points to several promising architectures that have yet to be examined. Any architecture that processes time-varying patterns requires two conceptually distinct components: a short-term memory that holds on to relevant past events and an associator that uses the short-term memory to classify or predict. My taxonomy is based on a characterization of short-term memory models along the dimensions of form, content, and adaptability. Experiments on predicting future values of a financial time series (US dollar-Swiss franc exchange rates) are presented using several alternative memory models. The results of these experiments serve as a baseline against which more sophisticated architectures can be compared. Neural networks have proven to be a promising alternative to traditional techniques for nonlinear temporal prediction tasks (e.g., Curtiss, Brandemuehl, & Kreider, 1992; Lapedes & Farber, 1987; Weigend, Huberman, & Rumelhart, 1992). However, temporal prediction is a particularly challenging problem because conventional neural net architectures and algorithms are not well suited for patterns that vary over time. The prototypical use of neural nets is in structural pattern recognition. In such a task, a collection of features (visual, semantic, or otherwise) is presented to a network and the network must categorize the input feature pattern as belonging to one or more classes. For example, a network might be trained to classify animal species based on a set of attributes describing living creatures such as "has tail", "lives in water", or "is carnivorous"; or a network could be trained to recognize visual patterns over a two-dimensional pixel array as a letter in {A, B, ..., Z}. In such tasks, the network is presented with all relevant information simultaneously. In contrast, temporal pattern recognition involves processing of patterns that evolve over time. The appropriate response at a particular point in time depends not only on the current input, but potentially on all previous inputs. This is illustrated in Figure 1, which shows the basic framework for a temporal prediction problem. I assume that time is quantized into discrete steps, a sensible assumption because many time series of interest are intrinsically discrete, and continuous series can be sampled at a fixed interval. The input at time t is denoted x(t). For univariate series, this input 1-hop neighbor's text information: A `SELF-REFERENTIAL' WEIGHT MATRIX: Weight modifications in traditional neural nets are computed by hard-wired algorithms. Without exception, all previous weight change algorithms have many specific limitations. Is it (in principle) possible to overcome limitations of hard-wired algorithms by allowing neural nets to run and improve their own weight change algorithms? This paper constructively demonstrates that the answer (in principle) is `yes'. I derive an initial gradient-based sequence learning algorithm for a `self-referential' recurrent network that can `speak' about its own weight matrix in terms of activations.
It uses some of its input and output units for observing its own errors and for explicitly analyzing and modifying its own weight matrix, including those parts of the weight matrix responsible for analyzing and modifying the weight matrix. The result is the first `introspective' neural net with explicit potential control over all of its own adaptive parameters. A disadvantage of the algorithm is its high computational complexity per time step, which is independent of the sequence length and equals O(n_conn log n_conn), where n_conn is the number of connections. Another disadvantage is the high number of local minima of the unusually complex error surface. The purpose of this paper, however, is not to come up with the most efficient `introspective' or `self-referential' weight change algorithm, but to show that such algorithms are possible at all. 1-hop neighbor's text information: Locally Connected Recurrent Networks: Lai-Wan CHAN and Evan Fung-Yu YOUNG Computer Science Department, The Chinese University of Hong Kong New Territories, Hong Kong Email : [email protected] Technical Report : CS-TR-95-10 Abstract The fully connected recurrent network (FRN) using the on-line training method, Real Time Recurrent Learning (RTRL), is computationally expensive. It has a computational complexity of O(N^4) and storage complexity of O(N^3), where N is the number of non-input units. We have devised a locally connected recurrent model which has a much lower complexity in both computational time and storage space. The ring-structure recurrent network (RRN), the simplest kind of locally connected network, has the corresponding complexities of O(mn+np) and O(np) respectively, where p, n and m are the number of input, hidden and output units respectively. We compare the performance of RRN and FRN in sequence recognition and time series prediction. We tested the networks' temporal memorization and time-warping abilities in the sequence recognition task. In the time series prediction task, we used both networks to train and predict three series: a periodic series with white noise, a deterministic chaotic series, and the sunspot data. Both tasks show that RRN needs a much shorter training time and that the performance of RRN is comparable to that of FRN. Target text information: A fixed size storage O(n^3) time complexity learning algorithm for fully recurrent continually running networks. : The RTRL algorithm for fully recurrent continually running networks (Robinson and Fallside, 1987; Williams and Zipser, 1989) requires O(n^4) computations per time step, where n is the number of non-input units. I describe a method suited for on-line learning which computes exactly the same gradient and requires fixed-size storage of the same order, but has an average time complexity per time step of O(n^3). I provide the content of the target node and its neighbors' information. The relation between the target node and its 1-hop neighbors is 'citation'. The 7 categories are: 0: Rule_Learning 1: Neural_Networks 2: Case_Based 3: Genetic_Algorithms 4: Theory 5: Reinforcement_Learning 6: Probabilistic_Methods Question: Based on the content of the target and neighbors' scientific publications, predict the category ID (0 to 6) for the target node.
1
Neural Networks
cora
1,370
test
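A sketch of the baseline RTRL sensitivity update referenced above, written to make the O(n^4)-per-step cost visible: the sensitivity tensor p[k, i, j] = da_k/dw_ij has n^3 entries and each update needs an O(n) sum. External inputs are dropped and all sizes are toy assumptions.

# Hedged sketch of one RTRL step (tanh units, no external inputs).
import numpy as np

n = 10                                    # non-input units (toy size)
rng = np.random.default_rng(0)
W = 0.1 * rng.normal(size=(n, n))         # recurrent weights
a = rng.uniform(-1, 1, size=n)            # activations
p = np.zeros((n, n, n))                   # sensitivities da_k/dw_ij

def step(a, p):
    s = W @ a
    a_new = np.tanh(s)
    fprime = 1.0 - a_new ** 2
    # p_new[k,i,j] = f'(s_k) * (sum_l W[k,l] * p[l,i,j] + delta_{k,i} * a[j])
    p_new = np.einsum("kl,lij->kij", W, p)       # the O(n^4) contraction
    p_new[np.arange(n), np.arange(n), :] += a    # delta_{k,i} * a_j term
    p_new *= fprime[:, None, None]
    return a_new, p_new

a, p = step(a, p)
print(p.shape)    # (n, n, n): the O(n^3) storage both papers refer to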
1-hop neighbor's text information: Developments in probabilistic modelling with neural networks: ensemble learning. : In this paper I give a review of ensemble learning using a simple example. 1-hop neighbor's text information: Flat minima. Neural Computation. : We present a new algorithm for finding low complexity neural networks with high generalization capability. The algorithm searches for a "flat" minimum of the error function. A flat minimum is a large connected region in weight-space where the error remains approximately constant. An MDL-based, Bayesian argument suggests that flat minima correspond to "simple" networks and low expected overfitting. The argument is based on a Gibbs algorithm variant and a novel way of splitting generalization error into underfitting and overfitting error. Unlike many previous approaches, ours does not require Gaussian assumptions and does not depend on a "good" weight prior; instead, we have a prior over input/output functions, thus taking into account net architecture and training set. Although our algorithm requires the computation of second order derivatives, it has backprop's order of complexity. Automatically, it effectively prunes units, weights, and input lines. Various experiments with feedforward and recurrent nets are described. In an application to stock market prediction, flat minimum search outperforms (1) conventional backprop, (2) weight decay, (3) "optimal brain surgeon" / "optimal brain damage". We also provide pseudo code of the algorithm (omitted from the NC-version). 1-hop neighbor's text information: MacKay (1995). Probabilistic networks: new models and new methods. : In this paper I describe the implementation of a probabilistic regression model in BUGS. BUGS is a program that carries out Bayesian inference on statistical problems using a simulation technique known as Gibbs sampling. It is possible to implement surprisingly complex regression models in this environment. I demonstrate the simultaneous inference of an interpolant and an input-dependent noise level. Target text information: Keeping neural networks simple by minimizing the description length of the weights. : Supervised neural networks generalize well if there is much less information in the weights than there is in the output vectors of the training cases. So during learning, it is important to keep the weights simple by penalizing the amount of information they contain. The amount of information in a weight can be controlled by adding Gaussian noise, and the noise level can be adapted during learning to optimize the trade-off between the expected squared error of the network and the amount of information in the weights. We describe a method of computing the derivatives of the expected squared error and of the amount of information in the noisy weights in a network that contains a layer of non-linear hidden units. Provided the output units are linear, the exact derivatives can be computed efficiently without time-consuming Monte Carlo simulations. The idea of minimizing the amount of information that is required to communicate the weights of a neural network leads to a number of interesting schemes for encoding the weights. I provide the content of the target node and its neighbors' information. The relation between the target node and its 1-hop neighbors is 'citation'.
The 7 categories are: 0: Rule_Learning 1: Neural_Networks 2: Case_Based 3: Genetic_Algorithms 4: Theory 5: Reinforcement_Learning 6: Probabilistic_Methods Question: Based on the content of the target and neighbors' scientific publications, predict the category ID (0 to 6) for the target node.
1
Neural Networks
cora
1,271
test
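A compact restatement of the noisy-weights objective described in the record above, in the standard Gaussian "bits-back" form (the paper's exact formulation may differ in detail): the expected description length is data-misfit bits plus weight bits, where each weight with posterior N(mu_i, sigma_i^2) and prior N(0, sigma_p^2) costs a Gaussian KL divergence:

\[
\mathcal{L} \;=\;
\mathbb{E}_{Q}\!\left[\frac{1}{2\sigma_d^2}\sum_c \bigl(y_c - \hat{y}_c(\mathbf{w})\bigr)^2\right]
\;+\; \sum_i \mathrm{KL}\!\left(\mathcal{N}(\mu_i,\sigma_i^2)\,\middle\|\,\mathcal{N}(0,\sigma_p^2)\right),
\qquad
\mathrm{KL} \;=\; \log\frac{\sigma_p}{\sigma_i}
\;+\; \frac{\sigma_i^2 + \mu_i^2}{2\sigma_p^2} \;-\; \frac{1}{2}.
\]

Adapting the noise levels \sigma_i during learning is what trades expected squared error against the information the weights carry.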
1-hop neighbor's text information: A comparison of selection schemes used in genetic algorithms. : TIK-Report Nr. 11, December 1995, Version 2 (2nd edition) 1-hop neighbor's text information: Causality in genetic programming. : Machine learning aims towards the acquisition of knowledge based either on experience from the interaction with the external environment or on analyzing the internal problem-solving traces. Both approaches can be implemented in the Genetic Programming (GP) paradigm. [Hillis, 1990] proves in an ingenious way how the first approach can work. There have not been any significant tests to prove that GP can take advantage of its own search traces. This paper presents an approach to automatic discovery of functions in GP based on the ideas of discovering useful building blocks by analyzing the evolution trace, generalizing blocks to define new functions, and finally adapting the problem representation on the fly. Adaptation of the representation determines a hierarchical organization of the extended function set which enables a restructuring of the search space so that solutions can be found more easily. Complexity measures of solution trees are defined for an adaptive representation framework and empirical results are presented. This material is based on work supported by the National Science Foundation under Grant IRI-8903582, by NIH/PHS research grant 1 R24 RR06853-02, and by a Human Science Frontiers Program research grant. The government has certain rights in this material. 1-hop neighbor's text information: Genetic programming and redundancy. : The Genetic Programming optimization method (GP) elaborated by John Koza [Koza, 1992] is a variant of Genetic Algorithms. The search space of the problem domain consists of computer programs represented as parse trees, and the crossover operator is realized by an exchange of subtrees. Empirical analyses show that large parts of those trees are never used or evaluated, which means that these parts of the trees are irrelevant for the solution or redundant. This paper is concerned with the identification of the redundancy occurring in GP. It starts with a mathematical description of the behavior of GP, and the conclusions drawn from that description explain, among other things, the "size problem", which denotes the phenomenon that the average size of trees in the population grows with time. Target text information: Evolving compact solutions in genetic programming: A case study. : Genetic programming (GP) is a variant of genetic algorithms where the data structures handled are trees. This makes GP especially useful for evolving functional relationships or computer programs, as both can be represented as trees. Symbolic regression is the determination of a functional dependence y = g(x) that approximates a set of data points (x_i, y_i). In this paper the feasibility of symbolic regression with GP is demonstrated on two examples taken from different domains. Furthermore, several methods suggested in the literature are compared that are intended to improve GP performance and the readability of solutions by taking into account the introns or redundancy that occurs in the trees and by keeping the size of the trees small. The experiments show that GP is an elegant and useful tool for deriving complex functional dependencies on numerical data. I provide the content of the target node and its neighbors' information. The relation between the target node and its 1-hop neighbors is 'citation'.
The 7 categories are: 0: Rule_Learning 1: Neural_Networks 2: Case_Based 3: Genetic_Algorithms 4: Theory 5: Reinforcement_Learning 6: Probabilistic_Methods Question: Based on the content of the target and neighbors' scientific publications, predict the category ID (0 to 6) for the target node.
3
Genetic Algorithms
cora
2,123
val
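To ground the symbolic-regression record above, a stripped-down sketch: candidate expression trees over {+, -, *} are generated and scored by squared error against data from g(x) = x^2 + x. Pure random generation stands in for GP here (no crossover or mutation), purely to show the representation and fitness function.

# Hedged sketch: expression trees + squared-error fitness for symbolic
# regression; random search replaces GP's genetic operators.
import random

random.seed(0)
OPS = {"+": lambda a, b: a + b, "-": lambda a, b: a - b,
       "*": lambda a, b: a * b}

def random_tree(depth=3):
    if depth == 0 or random.random() < 0.3:       # terminal: x or constant
        return random.choice(["x", round(random.uniform(-2, 2), 2)])
    op = random.choice(list(OPS))
    return (op, random_tree(depth - 1), random_tree(depth - 1))

def evaluate(tree, x):
    if tree == "x":
        return x
    if isinstance(tree, float):
        return tree
    op, left, right = tree
    return OPS[op](evaluate(left, x), evaluate(right, x))

data = [(x, x * x + x) for x in range(-5, 6)]     # target: g(x) = x^2 + x

def fitness(tree):                                # lower is better
    return sum((evaluate(tree, x) - y) ** 2 for x, y in data)

best = min((random_tree() for _ in range(5000)), key=fitness)
print(best, fitness(best))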
1-hop neighbor's text information: Poole (1997). A constraint-based approach to preference elicitation and decision making. : We investigate the solution of constraint-based configuration problems in which the preference function over outcomes is unknown or incompletely specified. The aim is to configure a system, such as a personal computer, so that it will be optimal for a given user. The goal of this project is to develop algorithms that generate the most preferred feasible configuration by posing preference queries to the user. In order to minimize the number and the complexity of preference queries posed to the user, the algorithm reasons about the user's preferences while taking into account constraints over the set of feasible configurations. We assume that the user can structure their preferences in a particular way that, while natural in many settings, can be exploited during the optimization process. We also address in a preliminary fashion the trade-offs between computational effort in the solution of a problem and the degree of interaction with the user. Target text information: Utility Elicitation as a Classification Problem: We investigate the application of classification techniques to utility elicitation. In a decision problem, two sets of parameters must generally be elicited: the probabilities and the utilities. While the prior and conditional probabilities in the model do not change from user to user, the utility models do. Thus it is necessary to elicit a utility model separately for each new user. Elicitation is long and tedious, particularly if the outcome space is large and not decomposable. There are two common approaches to utility function elicitation. The first is to base the determination of the user's utility function solely on elicitation of qualitative preferences. The second makes assumptions about the form and decomposability of the utility function. Here we take a different approach: we attempt to identify the new user's utility function based on classification relative to a database of previously collected utility functions. We do this by identifying clusters of utility functions that minimize an appropriate distance measure. Having identified the clusters, we develop a classification scheme that requires many fewer and simpler assessments than full utility elicitation and is more robust than utility elicitation based solely on preferences. We have tested our algorithm on a small database of utility functions in a prenatal diagnosis domain and the results are quite promising. I provide the content of the target node and its neighbors' information. The relation between the target node and its 1-hop neighbors is 'citation'. The 7 categories are: 0: Rule_Learning 1: Neural_Networks 2: Case_Based 3: Genetic_Algorithms 4: Theory 5: Reinforcement_Learning 6: Probabilistic_Methods Question: Based on the content of the target and neighbors' scientific publications, predict the category ID (0 to 6) for the target node.
6
Probabilistic Methods
cora
471
val
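A hedged sketch of the classification view of utility elicitation in the record above: cluster a database of stored utility vectors, then assign a new user to the nearest cluster using answers for only a few probed outcomes. The synthetic data, cluster count, and k-means choice are all illustrative assumptions.

# Hedged sketch: cluster stored utility functions, classify a new user
# from a handful of elicited outcome utilities (synthetic data).
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(2)
n_outcomes = 20
prototypes = rng.uniform(size=(3, n_outcomes))      # three "user types"
db = np.vstack([p + 0.05 * rng.normal(size=(30, n_outcomes))
                for p in prototypes])               # 90 stored users

km = KMeans(n_clusters=3, n_init=10, random_state=0).fit(db)

# New user: elicit utilities for just 5 outcomes, then choose the
# cluster whose centroid is closest on those coordinates alone.
probe = rng.choice(n_outcomes, size=5, replace=False)
answers = prototypes[1, probe] + 0.05 * rng.normal(size=5)
d = ((km.cluster_centers_[:, probe] - answers) ** 2).sum(axis=1)
print("assigned cluster:", int(d.argmin()))         # labels are arbitrary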
1-hop neighbor's text information: Knowledge representation for supporting decision model formulation in medicine. : This paper outlines a methodology for analyzing the representational support for knowledge-based decision-modeling in a broad domain. A relevant set of inference patterns and knowledge types are identified. By comparing the analysis results to existing representations, some insights are gained into a design approach for integrating categorical and uncertain knowledge in a context-sensitive manner. 1-hop neighbor's text information: From knowledge bases to decision models. : Modeling techniques developed recently in the AI and uncertain reasoning communities permit significantly more flexible specifications of probabilistic knowledge. Specifically, graphical decision-modeling formalisms (belief networks, influence diagrams, and their variants) provide compact representation of probabilistic relationships, and support inference algorithms that automatically exploit the dependence structure in such models [1, 3, 4]. These advances have brought on a resurgence of interest in computational decision systems based on normative theories of belief and preference. However, graphical decision-modeling languages are still quite limited for purposes of knowledge representation because, while they can describe the relationships among particular event instances, they cannot capture general knowledge about probabilistic relationships across classes of events. The inability to capture general knowledge is a serious impediment for those AI tasks in which the relevant factors of a decision problem cannot be enumerated in advance. A graphical decision model encodes a particular set of probabilistic dependencies, a predefined set of decision alternatives, and a specific mathematical form for a utility function. Given a properly specified model, there exist relatively efficient algorithms for calculating posterior probabilities and optimal decision policies. A range of similar cases may be handled by parametric variations of the original model. However, if the structure of dependencies, the set of available alternatives, or the form of utility function changes from situation to situation, then a fixed network representation is no longer adequate. An ideal computational decision system would possess general, broad knowledge of a domain, but would have the ability to reason about the particular circumstances of any given decision problem within the domain. One obvious approach, which we call knowledge-based model construction (KBMC), is to generate a decision model dynamically at run-time, based on the problem description and information received thus far. Model construction consists of selection, instantiation, and assembly of causal and associational relationships from a broad knowledge base of general relationships among domain concepts. For example, suppose we wish to develop a system to recommend appropriate actions for maintaining a computer network. The natural graphical decision model would include chance Target text information: Abstract: Automated decision making is often complicated by the complexity of the knowledge involved. Much of this complexity arises from the context-sensitive variations of the underlying phenomena. We propose a framework for representing descriptive, context-sensitive knowledge. Our approach attempts to integrate categorical and uncertain knowledge in a network formalism.
This paper outlines the basic representation constructs, examines their expressiveness and efficiency, and discusses the potential applications of the framework. I provide the content of the target node and its neighbors' information. The relation between the target node and its 1-hop neighbors is 'citation'. The 7 categories are: 0: Rule_Learning 1: Neural_Networks 2: Case_Based 3: Genetic_Algorithms 4: Theory 5: Reinforcement_Learning 6: Probabilistic_Methods Question: Based on the content of the target and neighbors' scientific publications, predict the category ID (0 to 6) for the target node.
6
Probabilistic Methods
cora
809
test
1-hop neighbor's text information: Toward Rational Planning and Replanning: Rational Reason Maintenance, Reasoning Economies, and Qualitative Preferences: Efficiency dictates that plans for large-scale distributed activities be revised incrementally, with parts of plans being revised only if the expected utility of identifying and revising the subplans improves on the expected utility of using the original plan. The problems of identifying and reconsidering the subplans affected by changed circumstances or goals are closely related to the problems of revising beliefs as new or changed information is gained. But traditional techniques of reason maintenance (the standard method for belief revision) choose revisions arbitrarily and enforce global notions of consistency and groundedness, which may mean reconsidering all beliefs or plan elements at each step. To address these problems, we developed (1) revision methods aimed at revising only those beliefs and plans worth revising, and tolerating incoherence and ungroundedness when these are judged less detrimental than a costly revision effort, (2) an artificial market economy in planning and revision tasks for arriving at overall judgments of worth, and (3) a representation for qualitative preferences that permits capture of common forms of dominance information. We view the activities of intelligent agents as stemming from interleaved or simultaneous planning, replanning, execution, and observation subactivities. In this model of the plan construction process, the agents continually evaluate and revise their plans in light of what happens in the world. Planning is necessary for the organization of large-scale activities because decisions about actions to be taken in the future have direct impact on what should be done in the shorter term. But even if well-constructed, the value of a plan decays as changing circumstances, resources, information, or objectives render the original course of action inappropriate. When changes occur before or during execution of the plan, it may be necessary to construct a new plan by starting from scratch, or by revising a previous plan to change only the portions actually affected by the changes. Given the information accrued during plan execution, which remaining parts of the original plan should be salvaged, and in what ways should other parts be changed? Incremental replanning first involves localizing the potential changes or conflicts by identifying the subset of the extant beliefs and plans in which they occur. It then involves choosing which of the identified beliefs and plans to keep and which to change. For greatest efficiency, the choices of what portion of the plan to revise and how to revise it should be based on coherent expectations about and preferences among the consequences of different alternatives so as to be rational in the sense of decision theory (Savage 1972). Our work toward mechanizing rational planning and replanning has focussed on four main issues. This paper focusses on the latter three issues; for our approach to the first, see (Doyle 1988; 1992). Replanning in an incremental and local manner requires that the planning procedures routinely identify the assumptions made during planning and connect plan elements with these assumptions, so that replanning may seek to change only those portions of a plan dependent upon assumptions brought into question by new information.
Consequently, the problem of revising plans to account for changed conditions has much 1-hop neighbor's text information: On the semantics of belief revision systems. : We consider belief revision operators that satisfy the Alchourron-Gardenfors-Makinson postulates, and present an epistemic logic in which, for any such revision operator, the result of a revision can be described by a sentence in the logic. In our logic, the fact that the agent's set of beliefs is α is represented by the sentence O α, where O is Levesque's `only know' operator. Intuitively, O α is read as `α is all that is believed.' The fact that the agent believes μ is represented by the sentence B μ, read in the usual way as `μ is believed'. The connective ⋄ represents update as defined by Katsuno and Mendelzon. The revised beliefs are represented by the sentence O α ⋄ B μ. We show that for every revision operator that satisfies the AGM postulates, there is a model for our epistemic logic such that the beliefs implied by the sentence O α ⋄ B μ in this model correspond exactly to the sentences implied by the theory that results from revising α by μ. This means that reasoning about changes in the agent's beliefs reduces to model checking of certain epistemic sentences. The negative result in the paper is that this type of formal account of revision cannot be extended to the situation where the agent is able to reason about its beliefs. A fully introspective agent cannot use our construction to reason about the results of its own revisions, on pain of triviality. 1-hop neighbor's text information: Rationality and its Roles in Reasoning (extended version), : The economic theory of rationality promises to equal mathematical logic in its importance for the mechanization of reasoning. We survey the growing literature on how the basic notions of probability, utility, and rational choice, coupled with practical limitations on information and resources, influence the design and analysis of reasoning and representation systems. Target text information: Rational belief revision (preliminary report). : Theories of rational belief revision recently proposed by Gardenfors and Nebel illuminate many important issues but impose unnecessarily strong standards for correct revisions and make strong assumptions about what information is available to guide revisions. We reconstruct these theories according to an economic standard of rationality in which preferences are used to select among alternative possible revisions. By permitting multiple partial specifications of preferences in ways closely related to preference-based nonmonotonic logics, the reconstructed theory employs information closer to that available in practice and offers more flexible ways of selecting revisions. We formally compare this notion of rational belief revision with those of Gardenfors and Nebel, adapt results about universal default theories to prove that there is no universal method of rational belief revision, and examine formally how different limitations on rationality affect belief revision. I provide the content of the target node and its neighbors' information. The relation between the target node and its 1-hop neighbors is 'citation'. The 7 categories are: 0: Rule_Learning 1: Neural_Networks 2: Case_Based 3: Genetic_Algorithms 4: Theory 5: Reinforcement_Learning 6: Probabilistic_Methods Question: Based on the content of the target and neighbors' scientific publications, predict the category ID (0 to 6) for the target node.
6
Probabilistic Methods
cora
2,618
val
1-hop neighbor's text information: Duplication of coding segments in genetic programming. : Research into the utility of non-coding segments, or introns, in genetic-based encodings has shown that they expedite the evolution of solutions in domains by protecting building blocks against destructive crossover. We consider a genetic programming system where non-coding segments can be removed, and the resultant chromosomes returned into the population. This parsimonious repair leads to premature convergence, since as we remove the naturally occurring non-coding segments, we strip away their protective backup feature. We then duplicate the coding segments in the repaired chromosomes, and place the modified chromosomes into the population. The duplication method significantly improves the learning rate in the domain we have considered. We also show that this method can be applied to other domains. 1-hop neighbor's text information: Collective memory search. : 1-hop neighbor's text information: Type inheritance in strongly typed genetic programming. : This paper appears as chapter 18 of Kenneth E. Kinnear, Jr. and Peter J. Angeline, editors, Advances in Genetic Programming 2, MIT Press, 1996. Abstract Genetic Programming (GP) is an automatic method for generating computer programs, which are stored as data structures and manipulated to evolve better programs. An extension restricting the search space is Strongly Typed Genetic Programming (STGP), which has, as a basic premise, the removal of closure by typing both the arguments and return values of functions, and by also typing the terminal set. A restriction of STGP is that there are only two levels of typing. We extend STGP by allowing a type hierarchy, which allows more than two levels of typing. Target text information: A Comparison of Random Search versus Genetic Programming as Engines for Collective Adaptation: We have integrated the distributed search of genetic programming (GP) based systems with collective memory to form a collective adaptation search method. Such a system significantly improves search as problem complexity is increased. Since the pure GP approach does not scale well with problem complexity, a natural question is which of the two components is actually contributing to the search process. We investigate a collective memory search which utilizes a random search engine and find that it significantly outperforms the GP-based search engine. We examine the solution space and show that as problem complexity and search space grow, a collective adaptive system will perform better than a collective memory search employing random search as an engine. I provide the content of the target node and its neighbors' information. The relation between the target node and its 1-hop neighbors is 'citation'. The 7 categories are: 0: Rule_Learning 1: Neural_Networks 2: Case_Based 3: Genetic_Algorithms 4: Theory 5: Reinforcement_Learning 6: Probabilistic_Methods Question: Based on the content of the target and neighbors' scientific publications, predict the category ID (0 to 6) for the target node.
3
Genetic Algorithms
cora
509
test
1-hop neighbor's text information: Eiben and C.A. Schippers. Multi-parent's niche: n-ary crossovers on NK-landscapes. : Using the multi-parent diagonal and scanning crossovers in GAs, reproduction operators obtain an adjustable arity. Hereby sexuality becomes a graded feature instead of a Boolean one. Our main objective is to relate the performance of GAs to the extent of sexuality used for reproduction on less arbitrary functions than those reported in the current literature. We investigate GA behaviour on Kauffman's NK-landscapes, which allow for systematic characterization and user control of the ruggedness of the fitness landscape. We test GAs with a varying extent of sexuality, ranging from asexual to 'very sexual'. Our tests were performed on two types of NK-landscapes: landscapes with random and landscapes with nearest-neighbour epistasis. For both landscape types we selected landscapes from a range of ruggednesses. The results confirm the superiority of (very) sexual recombination on mildly epistatic problems. 1-hop neighbor's text information: "Using Problem Generators to Explore the Effects of Epistasis," : In this paper we develop an empirical methodology for studying the behavior of evolutionary algorithms based on problem generators. We then describe three generators that can be used to study the effects of epistasis on the performance of EAs. Finally, we illustrate the use of these ideas in a preliminary exploration of the effects of epistasis on simple GAs. 1-hop neighbor's text information: Smith (1995), "A genetic approach to the quadratic assignment problem", : Augmenting genetic algorithms with local search heuristics is a promising approach to the solution of combinatorial optimization problems. In this paper, a genetic local search approach to the quadratic assignment problem (QAP) is presented. New genetic operators for realizing the approach are described, and its performance is tested on various QAP instances containing between 30 and 256 facilities/locations. The results indicate that the proposed algorithm is able to arrive at high quality solutions in a relatively short time limit: for the largest publicly known problem instance, a new best solution could be found. Target text information: On the Effectiveness of Evolutionary Search in High-Dimensional NK-Landscapes: NK-landscapes offer the ability to assess the performance of evolutionary algorithms on problems with different degrees of epistasis. In this paper, we study the performance of six algorithms in NK-landscapes with low and high dimension while keeping the amount of epistatic interactions constant. The results show that compared to genetic local search algorithms, the performance of standard genetic algorithms employing crossover or mutation significantly decreases with increasing problem size. Furthermore, with increasing K, crossover-based algorithms are in both cases outperformed by mutation-based algorithms. However, the relative performance differences between the algorithms grow significantly with the dimension of the search space, indicating that it is important to consider high-dimensional landscapes for evaluating the performance of evolutionary algorithms. I provide the content of the target node and its neighbors' information. The relation between the target node and its 1-hop neighbors is 'citation'.
The 7 categories are: 0: Rule_Learning 1: Neural_Networks 2: Case_Based 3: Genetic_Algorithms 4: Theory 5: Reinforcement_Learning 6: Probabilistic_Methods Question: Based on the content of the target and neighbors' scientific publications, predict the category ID (0 to 6) for the target node.
3
Genetic Algorithms
cora
506
test
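A small sketch of the NK-landscape construction behind the records above: each locus contributes a table-lookup fitness depending on its own bit and its K nearest neighbours (wrapped around the string). All parameters are toy values.

# Hedged sketch: NK-landscape fitness with nearest-neighbour epistasis.
import numpy as np

def make_nk(n, k, seed=0):
    rng = np.random.default_rng(seed)
    # One random lookup table per locus, over the locus and its K neighbours.
    tables = rng.uniform(size=(n, 2 ** (k + 1)))
    def fitness(bits):
        total = 0.0
        for i in range(n):
            idx = 0
            for j in range(k + 1):                     # locus i plus the K
                idx = (idx << 1) | int(bits[(i + j) % n])  # loci to its right
            total += tables[i, idx]
        return total / n
    return fitness

f = make_nk(n=20, k=4)
bits = np.random.default_rng(1).integers(0, 2, size=20)
print("fitness:", f(bits))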
1-hop neighbor's text information: "A General Lower Bound on the Number of Examples Needed for Learning," : We prove a lower bound of Ω((1/ε) ln(1/δ) + VCdim(C)/ε) on the number of random examples required for distribution-free learning of a concept class C, where VCdim(C) is the Vapnik-Chervonenkis dimension and ε and δ are the accuracy and confidence parameters. This improves the previous best lower bound of Ω((1/ε) ln(1/δ) + VCdim(C)), and comes close to the known general upper bound of O((1/ε) ln(1/δ) + (VCdim(C)/ε) ln(1/ε)) for consistent algorithms. We show that for many interesting concept classes, including kCNF and kDNF, our bound is actually tight to within a constant factor. 1-hop neighbor's text information: Learning concepts by asking questions. In R.S. : Two important issues in machine learning are explored: the role that memory plays in acquiring new concepts; and the extent to which the learner can take an active part in acquiring these concepts. This chapter describes a program, called Marvin, which uses concepts it has learned previously to learn new concepts. The program forms hypotheses about the concept being learned and tests the hypotheses by asking the trainer questions. Learning begins when the trainer shows Marvin an example of the concept to be learned. The program determines which objects in the example belong to concepts stored in the memory. A description of the new concept is formed by using the information obtained from the memory to generalize the description of the training example. The generalized description is tested when the program constructs new examples and shows these to the trainer, asking if they belong to the target concept. Target text information: Inductive Logic Programming: A new research area, Inductive Logic Programming, is presently emerging. While inheriting various positive characteristics of the parent subjects of Logic Programming and Machine Learning, it is hoped that the new area will overcome many of the limitations of its forebears. The background to present developments within this area is discussed and various goals and aspirations for the increasing body of researchers are identified. Inductive Logic Programming needs to be based on sound principles from both Logic and Statistics. On the side of statistical justification of hypotheses we discuss the possible relationship between Algorithmic Complexity theory and Probably-Approximately-Correct (PAC) Learning. In terms of logic we provide a unifying framework for Muggleton and Buntine's Inverse Resolution (IR) and Plotkin's Relative Least General Generalisation (RLGG) by rederiving RLGG in terms of IR. This leads to a discussion of the feasibility of extending the RLGG framework to allow for the invention of new predicates, previously discussed only within the context of IR. I provide the content of the target node and its neighbors' information. The relation between the target node and its 1-hop neighbors is 'citation'. The 7 categories are: 0: Rule_Learning 1: Neural_Networks 2: Case_Based 3: Genetic_Algorithms 4: Theory 5: Reinforcement_Learning 6: Probabilistic_Methods Question: Based on the content of the target and neighbors' scientific publications, predict the category ID (0 to 6) for the target node.
4
Theory
cora
821
val
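With the sample-complexity bounds restored in the record above, a throwaway calculator for their order-level values; the hidden constant factors are set to 1 purely for illustration, since the theorems fix only the order of growth.

# Hedged sketch: evaluate the PAC bounds quoted above with constants = 1.
import math

def lower_bound(eps, delta, vcdim):
    # Omega((1/eps) ln(1/delta) + VCdim/eps)
    return (1 / eps) * math.log(1 / delta) + vcdim / eps

def upper_bound(eps, delta, vcdim):
    # O((1/eps) ln(1/delta) + (VCdim/eps) ln(1/eps))
    return (1 / eps) * math.log(1 / delta) + (vcdim / eps) * math.log(1 / eps)

print(lower_bound(0.05, 0.01, 10))   # ~292 examples
print(upper_bound(0.05, 0.01, 10))   # ~691 examples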
1-hop neighbor's text information: Compositional modeling with dpns. : 1-hop neighbor's text information: Probabilistic independence networks for hidden Markov probability models. : Graphical techniques for modeling the dependencies of random variables have been explored in a variety of different areas including statistics, statistical physics, artificial intelligence, speech recognition, image processing, and genetics. Formalisms for manipulating these models have been developed relatively independently in these research communities. In this paper we explore hidden Markov models (HMMs) and related structures within the general framework of probabilistic independence networks (PINs). The paper contains a self-contained review of the basic principles of PINs. It is shown that the well-known forward-backward (F-B) and Viterbi algorithms for HMMs are special cases of more general inference algorithms for arbitrary PINs. Furthermore, the existence of inference and estimation algorithms for more general graphical models provides a set of analysis tools for HMM practitioners who wish to explore a richer class of HMM structures. Examples of relatively complex models to handle sensor fusion and coarticulation in speech recognition are introduced and treated within the graphical model framework to illustrate the advantages of the general approach. This report describes research done at the Department of Information and Computer Science, University of California, Irvine, the Jet Propulsion Laboratory, California Institute of Technology, Microsoft Research, the Center for Biological and Computational Learning, and the Artificial Intelligence Laboratory of the Massachusetts Institute of Technology. The authors can be contacted as [email protected], [email protected], and [email protected]. Support for CBCL is provided in part by a grant from the NSF (ASC-9217041). Support for the laboratory's artificial intelligence research is provided in part by the Advanced Research Projects Agency of the Dept. of Defense. MIJ gratefully acknowledges discussions with Steffen Lauritzen on the application of the IPF algorithm to UPINs. 1-hop neighbor's text information: Factorial hidden Markov models. : Hidden Markov models (HMMs) have proven to be one of the most widely used tools for learning probabilistic models of time series data. In an HMM, information about the past is conveyed through a single discrete variable, the hidden state. We discuss a generalization of HMMs in which this state is factored into multiple state variables and is therefore represented in a distributed manner. We describe an exact algorithm for inferring the posterior probabilities of the hidden state variables given the observations, and relate it to the forward-backward algorithm for HMMs and to algorithms for more general graphical models. Due to the combinatorial nature of the hidden state representation, this exact algorithm is intractable. As in other intractable systems, approximate inference can be carried out using Gibbs sampling or variational methods. Within the variational framework, we present a structured approximation in which the state variables are decoupled, yielding a tractable algorithm for learning the parameters of the model. Empirical comparisons suggest that these approximations are efficient and provide accurate alternatives to the exact methods.
Finally, we use the structured approximation to model Bach's chorales and show that factorial HMMs can capture statistical structure in this data set which an unconstrained HMM cannot. Target text information: Speech recognition with dynamic Bayesian networks. : Dynamic Bayesian networks (DBNs) are a useful tool for representing complex stochastic processes. Recent developments in inference and learning in DBNs allow their use in real-world applications. In this paper, we apply DBNs to the problem of speech recognition. The factored state representation enabled by DBNs allows us to explicitly represent long-term articulatory and acoustic context in addition to the phonetic-state information maintained by hidden Markov models (HMMs). Furthermore, it enables us to model the short-term correlations among multiple observation streams within single time-frames. Given a DBN structure capable of representing these long- and short-term correlations, we applied the EM algorithm to learn models with up to 500,000 parameters. The use of structured DBN models decreased the error rate by 12 to 29% on a large-vocabulary isolated-word recognition task, compared to a discrete HMM; it also improved significantly on other published results for the same task. This is the first successful application of DBNs to a large-scale speech recognition problem. Investigation of the learned models indicates that the hidden state variables are strongly correlated with acoustic properties of the speech signal. I provide the content of the target node and its neighbors' information. The relation between the target node and its 1-hop neighbors is 'citation'. The 7 categories are: 0: Rule_Learning 1: Neural_Networks 2: Case_Based 3: Genetic_Algorithms 4: Theory 5: Reinforcement_Learning 6: Probabilistic_Methods Question: Based on the content of the target and neighbors' scientific publications, predict the category ID (0 to 6) for the target node.
6
Probabilistic Methods
cora
1,613
test
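The record above leans on the forward-backward algorithm as the basic HMM inference routine that DBN methods generalize. A minimal sketch of the forward pass, with a made-up two-state, two-symbol model (all probabilities are illustrative assumptions):

```python
import numpy as np

def forward(pi, A, B, obs):
    """Forward pass of an HMM: returns P(observations) by summing over
    hidden-state paths. pi: initial state probs (N,), A: transition
    matrix (N,N), B: emission matrix (N,M), obs: list of symbol indices."""
    alpha = pi * B[:, obs[0]]          # joint prob of state and first symbol
    for o in obs[1:]:
        alpha = (alpha @ A) * B[:, o]  # propagate one step, then emit
    return alpha.sum()

# Toy 2-state, 2-symbol model (numbers made up for illustration).
pi = np.array([0.6, 0.4])
A  = np.array([[0.7, 0.3], [0.2, 0.8]])
B  = np.array([[0.9, 0.1], [0.3, 0.7]])
print(forward(pi, A, B, [0, 1, 1, 0]))
```

The factored-state models in the abstracts replace the single state index here with a tuple of state variables, which is what makes exact inference combinatorial.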
1-hop neighbor's text information: "Visual information processing in primate cone pathways: Part I, a model," : Target text information: : Information Processing in Primate Retinal Cone Pathways: Experiments and Results I provide the content of the target node and its neighbors' information. The relation between the target node and its 1-hop neighbors is 'citation'. The 7 categories are: 0: Rule_Learning 1: Neural_Networks 2: Case_Based 3: Genetic_Algorithms 4: Theory 5: Reinforcement_Learning 6: Probabilistic_Methods Question: Based on the content of the target and neighbors' scientific publications, predict the category ID (0 to 6) for the target node.
1
Neural Networks
cora
858
test
1-hop neighbor's text information: Phenes and the Baldwin Effect: Learning and evolution in a simulated population, : The Baldwin Effect, first proposed in the late nineteenth century, suggests that the course of evolutionary change can be influenced by individually learned behavior. The existence of this effect is still a hotly debated topic. In this paper clear evidence is presented that learning-based plasticity at the phenotypic level can and does produce directed changes at the genotypic level. This research confirms earlier experimental work done by others, notably Hinton & Nowlan (1987). Further, the amount of plasticity of the learned behavior is shown to be crucial to the size of the Baldwin Effect: either too little or too much and the effect disappears or is significantly reduced. Finally, for learnable traits, the case is made that over many generations it will become easier for the population as a whole to learn these traits (i.e. the phenotypic plasticity of these traits will increase). In this gradual transition from a genetically driven population to one driven by learning, the importance of the Baldwin Effect decreases. 1-hop neighbor's text information: No Gain: Landscapes, Learning Costs and Genetic Assimilation (submitted to EC; University of Sussex CSRP 409): The evolution of a population can be guided by phenotypic traits acquired by members of that population during their lifetime. This phenomenon, known as the Baldwin Effect, can speed the evolutionary process as traits that are initially acquired become genetically specified in later generations. This paper presents conditions under which this genetic assimilation can take place. As well as the benefits that lifetime adaptation can give a population, there may be a cost to be paid for that adaptive ability. It is the evolutionary trade-off between these costs and benefits that provides the selection pressure for acquired traits to become genetically specified. It is also noted that genotypic space, in which evolution operates, and phenotypic space, on which adaptive processes (such as learning) operate, are, in general, of a different nature. To guarantee that an acquired characteristic can become genetically specified, these spaces must have the property of neighbourhood correlation, which means that a small distance between two individuals in phenotypic space implies that there is a small distance between the same two individuals in genotypic space. Target text information: A study of the effects of group formation on evolutionary search: I provide the content of the target node and its neighbors' information. The relation between the target node and its 1-hop neighbors is 'citation'. The 7 categories are: 0: Rule_Learning 1: Neural_Networks 2: Case_Based 3: Genetic_Algorithms 4: Theory 5: Reinforcement_Learning 6: Probabilistic_Methods Question: Based on the content of the target and neighbors' scientific publications, predict the category ID (0 to 6) for the target node.
3
Genetic Algorithms
cora
857
val
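The Hinton & Nowlan (1987) experiment cited above is small enough to sketch. The following is an illustrative reimplementation in its spirit, not the cited papers' code; the genome length, population size, trial count, and fitness bonus are all assumed values:

```python
import random

GENOME_LEN, POP, TRIALS, GENS = 20, 200, 100, 30

def fitness(genome):
    """An individual learns by guessing its '?' loci for a fixed number of
    trials; fitness rewards finding the all-ones target quickly (learning
    smooths an otherwise needle-in-a-haystack landscape)."""
    if '0' in genome:
        return 1.0                         # a hard-wired 0 can never match
    unknown = genome.count('?')
    for t in range(TRIALS):
        if all(random.random() < 0.5 for _ in range(unknown)):
            return 1.0 + 19.0 * (TRIALS - t) / TRIALS
    return 1.0

def evolve():
    pop = [''.join(random.choice('01?') for _ in range(GENOME_LEN))
           for _ in range(POP)]
    for _ in range(GENS):
        weights = [fitness(ind) for ind in pop]
        parents = random.choices(pop, weights=weights, k=2 * POP)
        # One-point crossover between consecutive parent pairs.
        pop = [p1[:GENOME_LEN // 2] + p2[GENOME_LEN // 2:]
               for p1, p2 in zip(parents[::2], parents[1::2])]
    return sum(ind.count('1') for ind in pop) / (POP * GENOME_LEN)

print("fraction of genetically fixed '1' alleles:", evolve())
```

Runs typically show '?' loci being displaced by hard-wired '1's over generations, which is the genetic assimilation both abstracts discuss.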
1-hop neighbor's text information: On convergence properties of the em algorithm for gaussian mixtures. : We build up the mathematical connection between the "Expectation-Maximization" (EM) algorithm and gradient-based approaches for maximum likelihood learning of finite Gaussian mixtures. We show that the EM step in parameter space is obtained from the gradient via a projection matrix P, and we provide an explicit expression for the matrix. We then analyze the convergence of EM in terms of special properties of P and provide new results analyzing the effect that P has on the likelihood surface. Based on these mathematical results, we present a comparative discussion of the advantages and disadvantages of EM and other algorithms for the learning of Gaussian mixture models. 1-hop neighbor's text information: Scatter-partitioning RBF network for function regression and image segmentation: Preliminary results: Scatter-partitioning Radial Basis Function (RBF) networks increase their number of degrees of freedom with the complexity of an input-output mapping to be estimated on the basis of a supervised training data set. Due to its superior expressive power, a scatter-partitioning Gaussian RBF (GRBF) model, termed Supervised Growing Neural Gas (SGNG), is selected from the literature. SGNG employs a one-stage error-driven learning strategy and is capable of generating and removing both hidden units and synaptic connections. A slightly modified SGNG version is tested as a function estimator when the training surface to be fitted is an image, i.e., a 2-D signal whose size is finite. The relationship between the generation, by the learning system, of disjointed maps of hidden units and the presence, in the image, of pictorially homogeneous subsets (segments) is investigated. Unfortunately, the examined SGNG version performs poorly both as function estimator and image segmenter. This may be due to an intrinsic inadequacy of the one-stage error-driven learning strategy to adjust structural parameters and output weights simultaneously but consistently. In the framework of RBF networks, further studies should investigate the combination of two-stage error-driven learning strategies with synapse generation and removal criteria. Internal report of the paper entitled "Image segmentation with scatter-partitioning RBF networks: A feasibility study," to be presented at the conference Applications and Science of Neural Networks, Fuzzy Systems, and Evolutionary Computation, part of SPIE's International Symposium on Optical Science, Engineering and Instrumentation, 19-24 July 1998, San Diego, CA. Target text information: "Soft vector quantization and the EM algorithm," : I provide the content of the target node and its neighbors' information. The relation between the target node and its 1-hop neighbors is 'citation'. The 7 categories are: 0: Rule_Learning 1: Neural_Networks 2: Case_Based 3: Genetic_Algorithms 4: Theory 5: Reinforcement_Learning 6: Probabilistic_Methods Question: Based on the content of the target and neighbors' scientific publications, predict the category ID (0 to 6) for the target node.
1
Neural Networks
cora
2,465
test
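The first record above analyzes EM for Gaussian mixtures; soft vector quantization corresponds to the E-step's soft assignments. A minimal EM sketch for a two-component 1-D mixture, with synthetic data and arbitrary initialization (not the cited papers' code):

```python
import numpy as np

def em_gmm_1d(x, k=2, iters=50):
    """Plain EM for a 1-D Gaussian mixture: alternate soft assignments
    (E-step) with closed-form weighted re-estimates (M-step)."""
    rng = np.random.default_rng(0)
    mu = rng.choice(x, k)                  # initialize means from the data
    var = np.full(k, x.var())
    w = np.full(k, 1.0 / k)
    for _ in range(iters):
        # E-step: responsibility of each component for each point.
        dens = w * np.exp(-0.5 * (x[:, None] - mu) ** 2 / var) \
                 / np.sqrt(2 * np.pi * var)
        r = dens / dens.sum(axis=1, keepdims=True)
        # M-step: weighted maximum-likelihood updates.
        n = r.sum(axis=0)
        mu = (r * x[:, None]).sum(axis=0) / n
        var = (r * (x[:, None] - mu) ** 2).sum(axis=0) / n
        w = n / len(x)
    return mu, var, w

x = np.concatenate([np.random.normal(-2, 1, 300), np.random.normal(3, 0.5, 300)])
print(em_gmm_1d(x))
```

Hardening the responsibilities r to 0/1 recovers ordinary vector quantization, which is the connection the target title points at.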
1-hop neighbor's text information: Introduction to the Theory of Neural Computation. : Neural computation, also called connectionism, parallel distributed processing, neural network modeling or brain-style computation, has grown rapidly in the last decade. Despite this explosion, and ultimately because of impressive applications, there has been a dire need for a concise introduction from a theoretical perspective, analyzing the strengths and weaknesses of connectionist approaches and establishing links to other disciplines, such as statistics or control theory. The Introduction to the Theory of Neural Computation by Hertz, Krogh and Palmer (subsequently referred to as HKP) is written from the perspective of physics, the home discipline of the authors. The book fulfills its mission as an introduction for neural network novices, provided that they have some background in calculus, linear algebra, and statistics. It covers a number of models that are often viewed as disjoint. Critical analyses and fruitful comparisons between these models 1-hop neighbor's text information: M.C., "Neural Net Architectures for Temporal Sequence Processing," Predicting the future and understanding the past (Eds. : I present a general taxonomy of neural net architectures for processing time-varying patterns. This taxonomy subsumes many existing architectures in the literature, and points to several promising architectures that have yet to be examined. Any architecture that processes time-varying patterns requires two conceptually distinct components: a short-term memory that holds on to relevant past events and an associator that uses the short-term memory to classify or predict. My taxonomy is based on a characterization of short-term memory models along the dimensions of form, content, and adaptability. Experiments on predicting future values of a financial time series (US dollar-Swiss franc exchange rates) are presented using several alternative memory models. The results of these experiments serve as a baseline against which more sophisticated architectures can be compared. Neural networks have proven to be a promising alternative to traditional techniques for nonlinear temporal prediction tasks (e.g., Curtiss, Brandemuehl, & Kreider, 1992; Lapedes & Farber, 1987; Weigend, Huberman, & Rumelhart, 1992). However, temporal prediction is a particularly challenging problem because conventional neural net architectures and algorithms are not well suited for patterns that vary over time. The prototypical use of neural nets is in structural pattern recognition. In such a task, a collection of features (visual, semantic, or otherwise) is presented to a network and the network must categorize the input feature pattern as belonging to one or more classes. For example, a network might be trained to classify animal species based on a set of attributes describing living creatures such as "has tail", "lives in water", or "is carnivorous"; or a network could be trained to recognize visual patterns over a two-dimensional pixel array as a letter in {A, B, ..., Z}. In such tasks, the network is presented with all relevant information simultaneously. In contrast, temporal pattern recognition involves processing of patterns that evolve over time. The appropriate response at a particular point in time depends not only on the current input, but potentially all previous inputs. This is illustrated in Figure 1, which shows the basic framework for a temporal prediction problem.
I assume that time is quantized into discrete steps, a sensible assumption because many time series of interest are intrinsically discrete, and continuous series can be sampled at a fixed interval. The input at time t is denoted x(t). For univariate series, this input 1-hop neighbor's text information: Tau Net: A Neural Network for Modeling Temporal Variability: The ability to handle temporal variation is important when dealing with real-world dynamic signals. In many applications, inputs do not come in as fixed-rate sequences, but rather as signals with time scales that can vary from one instance to the next; thus, modeling dynamic signals requires not only the ability to recognize sequences but also the ability to handle temporal changes in the signal. This paper discusses "Tau Net," a neural network for modeling dynamic signals, and its application to speech. In Tau Net, sequence learning is accomplished using a combination of prediction, recurrence and time-delay connections. Temporal variability is modeled by having adaptable time constants in the network, which are adjusted with respect to the prediction error. Adapting the time constants changes the time scale of the network, and the adapted value of the network's time constant provides a measure of temporal variation in the signal. Tau Net has been applied to several simple signals: sets of sine waves differing in frequency and in phase [2], a multidimensional signal representing the walking gait of children [3], and the energy contour of a simple speech utterance [11]. Tau Net has also been shown to work on a voicing distinction task using synthetic speech data [12]. In this paper, Tau Net is applied to two speaker-independent tasks, vowel recognition (of {/ae/, /iy/, /ux/}) and consonant recognition (of {/p/, /t/, /k/}) using speech data taken from the TIMIT database. It is shown that Tau Nets, trained on medium-rate tokens, achieved about the same performance as networks without time constants trained on tokens at all rates, and performed better than networks without time constants trained on medium-rate tokens. Our results demonstrate Tau Net's ability to identify vowels and consonants at variable speech rates by extrapolating to rates not represented in the training set. Target text information: Induction of multiscale temporal structure. : Learning structure in temporally-extended sequences is a difficult computational problem because only a fraction of the relevant information is available at any instant. Although variants of back propagation can in principle be used to find structure in sequences, in practice they are not sufficiently powerful to discover arbitrary contingencies, especially those spanning long temporal intervals or involving high order statistics. For example, in designing a connectionist network for music composition, we have encountered the problem that the net is able to learn musical structure that occurs locally in time (e.g., relations among notes within a musical phrase) but not structure that occurs over longer time periods (e.g., relations among phrases). To address this problem, we require a means of constructing a reduced description of the sequence that makes global aspects more explicit or more readily detectable. I propose to achieve this using hidden units that operate with different time constants.
Simulation experiments indicate that slower time-scale hidden units are able to pick up global structure, structure that simply cannot be learned by standard back propagation. Many patterns in the world are intrinsically temporal, e.g., speech, music, the unfolding of events. Recurrent neural net architectures have been devised to accommodate time-varying sequences. For example, the architecture shown in Figure 1 can map a sequence of inputs to a sequence of outputs. Learning structure in temporally-extended sequences is a difficult computational problem because the input pattern may not contain all the task-relevant information at any instant. I provide the content of the target node and its neighbors' information. The relation between the target node and its 1-hop neighbors is 'citation'. The 7 categories are: 0: Rule_Learning 1: Neural_Networks 2: Case_Based 3: Genetic_Algorithms 4: Theory 5: Reinforcement_Learning 6: Probabilistic_Methods Question: Based on the content of the target and neighbors' scientific publications, predict the category ID (0 to 6) for the target node.
1
Neural Networks
cora
1,569
test
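The target abstract above proposes hidden units with different time constants. A minimal sketch of the underlying idea, a leaky integrator whose decay constant sets its temporal resolution (the signal and constants are illustrative assumptions, not the paper's architecture):

```python
import numpy as np

def leaky_trace(signal, tau):
    """Leaky integrator: h(t) = tau * h(t-1) + (1 - tau) * x(t).
    A tau near 1 yields a slow unit that tracks global structure;
    a tau near 0 yields a fast unit that tracks local detail."""
    h, out = 0.0, []
    for x in signal:
        h = tau * h + (1.0 - tau) * x
        out.append(h)
    return np.array(out)

t = np.arange(400)
# Local wiggle superimposed on a slow drift (made-up signal).
sig = np.sin(2 * np.pi * t / 20) + np.sin(2 * np.pi * t / 200)
fast, slow = leaky_trace(sig, tau=0.5), leaky_trace(sig, tau=0.98)
print("fast unit std:", fast.std(), "slow unit std:", slow.std())
```

The slow unit largely averages away the fast component, which is how a slower time-scale hidden unit can expose phrase-level rather than note-level structure.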
1-hop neighbor's text information: Cortical activity flips among quasi-stationary states. : M. Abeles, H. Bergman and E. Vaadia, School of Medicine and Center for Neural Computation Hebrew University, POB 12272, Jerusalem 91120, Israel. E. Seidemann and I. Meilijson, School of Mathematical Sciences, Raymond and Beverly Sackler Faculty of Exact Sciences, and School of Medicine, Tel Aviv University, 69978 Tel Aviv, Israel. I. Gat and N. Tishby, Institute of Computer Science and Center for Neural Computation, Hebrew University, Jerusalem 91904, Israel. Target text information: Hidden Markov Modeling of simultaneously recorded cells in the Associative cortex of behaving monkeys: A widely held idea regarding information processing in the brain is the cell-assembly hypothesis suggested by Hebb in 1949. According to this hypothesis, the basic unit of information processing in the brain is an assembly of cells, which can act briefly as a closed system, in response to a specific stimulus. This work presents a novel method of characterizing this supposed activity using a Hidden Markov Model. This model is able to reveal some of the underlying cortical network activity of behavioral processes. In our study the process in hand was the simultaneous activity of several cells recorded from the frontal cortex of behaving monkeys. Using such a model we were able to identify the behavioral mode of the animal and directly identify the corresponding collective network activity. Furthermore, the segmentation of the data into the discrete states also provides direct evidence for the state dependency of the short-time correlation functions between the same pair of cells. Thus, this cross-correlation depends on the network state of activity and not on local connectivity alone. I provide the content of the target node and its neighbors' information. The relation between the target node and its 1-hop neighbors is 'citation'. The 7 categories are: 0: Rule_Learning 1: Neural_Networks 2: Case_Based 3: Genetic_Algorithms 4: Theory 5: Reinforcement_Learning 6: Probabilistic_Methods Question: Based on the content of the target and neighbors' scientific publications, predict the category ID (0 to 6) for the target node.
1
Neural Networks
cora
572
val
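The target paper above segments multi-unit recordings into discrete hidden states. The standard way to obtain such a segmentation from a trained HMM is Viterbi decoding; a minimal sketch with made-up parameters follows (the sticky transition matrix is an assumption chosen so that decoded states form contiguous segments):

```python
import numpy as np

def viterbi(pi, A, B, obs):
    """Most likely hidden-state sequence for an HMM (log domain)."""
    logA, logB = np.log(A), np.log(B)
    delta = np.log(pi) + logB[:, obs[0]]
    back = []
    for o in obs[1:]:
        scores = delta[:, None] + logA       # (from, to) path scores
        back.append(scores.argmax(axis=0))
        delta = scores.max(axis=0) + logB[:, o]
    state = int(delta.argmax())
    path = [state]
    for bp in reversed(back):                # trace best predecessors
        state = int(bp[state])
        path.append(state)
    return path[::-1]

pi = np.array([0.5, 0.5])
A  = np.array([[0.95, 0.05], [0.10, 0.90]])  # sticky states => segmentation
B  = np.array([[0.8, 0.2], [0.3, 0.7]])
print(viterbi(pi, A, B, [0, 0, 1, 1, 1, 0, 1, 1]))
```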
1-hop neighbor's text information: Integrated Architectures for Learning, Planning and Reacting Based on Approximating Dynamic Programming, : This paper extends previous work with Dyna, a class of architectures for intelligent systems based on approximating dynamic programming methods. Dyna architectures integrate trial-and-error (reinforcement) learning and execution-time planning into a single process operating alternately on the world and on a learned model of the world. In this paper, I present and show results for two Dyna architectures. The Dyna-PI architecture is based on dynamic programming's policy iteration method and can be related to existing AI ideas such as evaluation functions and universal plans (reactive systems). Using a navigation task, results are shown for a simple Dyna-PI system that simultaneously learns by trial and error, learns a world model, and plans optimal routes using the evolving world model. The Dyna-Q architecture is based on Watkins's Q-learning, a new kind of reinforcement learning. Dyna-Q uses a less familiar set of data structures than does Dyna-PI, but is arguably simpler to implement and use. We show that Dyna-Q architectures are easy to adapt for use in changing environments. 1-hop neighbor's text information: Dynamic Programming and Markov Processes. : The problem of maximizing the expected total discounted reward in a completely observable Markovian environment, i.e., a Markov decision process (mdp), models a particular class of sequential decision problems. Algorithms have been developed for making optimal decisions in mdps given either an mdp specification or the opportunity to interact with the mdp over time. Recently, other sequential decision-making problems have been studied prompting the development of new algorithms and analyses. We describe a new generalized model that subsumes mdps as well as many of the recent variations. We prove some basic results concerning this model and develop generalizations of value iteration, policy iteration, model-based reinforcement-learning, and Q-learning that can be used to make optimal decisions in the generalized model under various assumptions. Applications of the theory to particular models are described, including risk-averse mdps, exploration-sensitive mdps, sarsa, Q-learning with spreading, two-player games, and approximate max picking via sampling. Central to the results are the contraction property of the value operator and a stochastic-approximation theorem that reduces asynchronous convergence to synchronous convergence. 1-hop neighbor's text information: A model for projection and action. : In designing autonomous agents that deal competently with issues involving time and space, there is a tradeoff to be made between guaranteed response-time reactions on the one hand, and flexibility and expressiveness on the other. We propose a model of action with probabilistic reasoning and decision analytic evaluation for use in a layered control architecture. Our model is well suited to tasks that require reasoning about the interaction of behaviors and events in a fixed temporal horizon. Decisions are continuously reevaluated, so that there is no problem with plans becoming obsolete as new information becomes available. In this paper, we are particularly interested in the tradeoffs required to guarantee a fixed response time in reasoning about nondeterministic cause-and-effect relationships.
By exploiting approximate decision making processes, we are able to trade accuracy in our predictions for speed in decision making in order to improve expected performance in dynamic situations. Target text information: Kanazawa, Reasoning about Time and Probability, : I provide the content of the target node and its neighbors' information. The relation between the target node and its 1-hop neighbors is 'citation'. The 7 categories are: 0: Rule_Learning 1: Neural_Networks 2: Case_Based 3: Genetic_Algorithms 4: Theory 5: Reinforcement_Learning 6: Probabilistic_Methods Question: Based on the content of the target and neighbors' scientific publications, predict the category ID (0 to 6) for the target node.
6
Probabilistic Methods
cora
1,087
train
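Dyna-PI builds on policy iteration, and the second abstract above generalizes value iteration via the contraction property of the value operator. A minimal value-iteration sketch on a two-state toy MDP (all numbers are illustrative assumptions):

```python
import numpy as np

def value_iteration(P, R, gamma=0.9, tol=1e-8):
    """Value iteration: repeatedly apply the Bellman optimality backup,
    a gamma-contraction, until the value function stops changing.
    P[a] is the (S,S) transition matrix for action a; R is (S,A) reward."""
    S, A = R.shape
    V = np.zeros(S)
    while True:
        Q = R + gamma * np.stack([P[a] @ V for a in range(A)], axis=1)
        V_new = Q.max(axis=1)
        if np.abs(V_new - V).max() < tol:
            return V_new, Q.argmax(axis=1)   # optimal values and policy
        V = V_new

# Two-state, two-action toy MDP (numbers made up for illustration).
P = np.array([[[0.9, 0.1], [0.2, 0.8]],      # action 0
              [[0.1, 0.9], [0.7, 0.3]]])     # action 1
R = np.array([[1.0, 0.0], [0.0, 2.0]])
print(value_iteration(P, R))
```

Because the backup is a γ-contraction, the loop converges from any starting V, which is the property the abstract's stochastic-approximation results extend to asynchronous and sampled updates.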
1-hop neighbor's text information: Back. The Gamma MLP for speech phoneme recognition. : We define a Gamma multi-layer perceptron (MLP) as an MLP with the usual synaptic weights replaced by gamma filters (as proposed by de Vries and Principe (de Vries & Principe 1992)) and associated gain terms throughout all layers. We derive gradient descent update equations and apply the model to the recognition of speech phonemes. We find that both the inclusion of gamma filters in all layers, and the inclusion of synaptic gains, improves the performance of the Gamma MLP. We compare the Gamma MLP with TDNN, Back-Tsoi FIR MLP, and Back-Tsoi IIR MLP architectures, and a local approximation scheme. We find that the Gamma MLP results in a substantial reduction in error rates. Target text information: The Gamma MLP Using Multiple Temporal Resolutions for Improved Classification: We have previously introduced the Gamma MLP which is defined as an MLP with the usual synaptic weights replaced by gamma filters and associated gain terms throughout all layers. In this paper we apply the Gamma MLP to a larger scale speech phoneme recognition problem, analyze the operation of the network, and investigate why the Gamma MLP can perform better than alternatives. The Gamma MLP is capable of employing multiple temporal resolutions (the temporal resolution is defined here, as per de Vries and Principe, as the number of parameters of freedom (i.e. the number of tap variables) per unit of time in the gamma memory; this is equal to the gamma memory parameter, as detailed in the paper). Multiple temporal resolutions may be advantageous for certain problems, e.g. different resolutions may be optimal for extracting different features from the input data. For the problem in this paper, the Gamma MLP is observed to use a large range of temporal resolutions. In comparison, TDNN networks typically use only a single temporal resolution. Further motivation for the Gamma MLP is related to the curse of dimensionality and the ability of the Gamma MLP to trade off temporal resolution for memory depth, and therefore increase memory depth without increasing the dimensionality of the network. The IIR MLP is a more general version of the Gamma MLP; however, the IIR MLP performs poorly for the problem in this paper. Investigation suggests that the error surface of the Gamma MLP is more suitable for gradient descent training than the error surface of the IIR MLP. I provide the content of the target node and its neighbors' information. The relation between the target node and its 1-hop neighbors is 'citation'. The 7 categories are: 0: Rule_Learning 1: Neural_Networks 2: Case_Based 3: Genetic_Algorithms 4: Theory 5: Reinforcement_Learning 6: Probabilistic_Methods Question: Based on the content of the target and neighbors' scientific publications, predict the category ID (0 to 6) for the target node.
1
Neural Networks
cora
435
test
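Both papers above replace synaptic weights with gamma filters. A minimal sketch of the de Vries-Principe gamma memory recursion (the order and the memory parameter mu are arbitrary illustrative choices, and the adaptive gain terms of the Gamma MLP are omitted):

```python
import numpy as np

def gamma_memory(signal, order=3, mu=0.5):
    """Gamma memory of de Vries & Principe:
       y_0(t) = x(t)
       y_k(t) = (1 - mu) * y_k(t-1) + mu * y_{k-1}(t-1),  k = 1..order
    Mean memory depth is order/mu, so a smaller mu buys depth at the
    cost of temporal resolution without adding taps."""
    taps = np.zeros(order + 1)
    history = []
    for x in signal:
        prev = taps.copy()
        taps[0] = x
        for k in range(1, order + 1):
            taps[k] = (1 - mu) * prev[k] + mu * prev[k - 1]
        history.append(taps.copy())
    return np.array(history)      # shape (T, order+1): the tap values

sig = np.sin(np.linspace(0, 6 * np.pi, 100))
print(gamma_memory(sig, order=3, mu=0.3)[-1])
```

The depth-for-resolution trade-off in the docstring is exactly the curse-of-dimensionality argument the target abstract makes for the Gamma MLP over fixed tapped delay lines.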
1-hop neighbor's text information: Learning to predict by the methods of temporal differences. : This article introduces a class of incremental learning procedures specialized for prediction; that is, for using past experience with an incompletely known system to predict its future behavior. Whereas conventional prediction-learning methods assign credit by means of the difference between predicted and actual outcomes, the new methods assign credit by means of the difference between temporally successive predictions. Although such temporal-difference methods have been used in Samuel's checker player, Holland's bucket brigade, and the author's Adaptive Heuristic Critic, they have remained poorly understood. Here we prove their convergence and optimality for special cases and relate them to supervised-learning methods. For most real-world prediction problems, temporal-difference methods require less memory and less peak computation than conventional methods; and they produce more accurate predictions. We argue that most problems to which supervised learning is currently applied are really prediction problems of the sort to which temporal-difference methods can be applied to advantage. 1-hop neighbor's text information: Learning without state-estimation in Partially Observable Markovian Decision Processes, : Reinforcement learning (RL) algorithms provide a sound theoretical basis for building learning control architectures for embedded agents. Unfortunately all of the theory and much of the practice (see Barto et al., 1983, for an exception) of RL is limited to Markovian decision processes (MDPs). Many real-world decision tasks, however, are inherently non-Markovian, i.e., the state of the environment is only incompletely known to the learning agent. In this paper we consider only partially observable MDPs (POMDPs), a useful class of non-Markovian decision processes. Most previous approaches to such problems have combined computationally expensive state-estimation techniques with learning control. This paper investigates learning in POMDPs without resorting to any form of state estimation. We present results about what TD(0) and Q-learning will do when applied to POMDPs. It is shown that the conventional discounted RL framework is inadequate to deal with POMDPs. Finally we develop a new framework for learning without state-estimation in POMDPs by including stochastic policies in the search space, and by defining the value or utility of a distribution over states. 1-hop neighbor's text information: On the convergence of stochastic iterative dynamic programming algorithms. : This project was supported in part by a grant from the McDonnell-Pew Foundation, by a grant from ATR Human Information Processing Research Laboratories, by a grant from Siemens Corporation, and by grant N00014-90-J-1942 from the Office of Naval Research. The project was also supported by NSF grant ASC-9217041 in support of the Center for Biological and Computational Learning at MIT, including funds provided by DARPA under the HPCC program. Michael I. Jordan is a NSF Presidential Young Investigator. Target text information: Reinforcement learning algorithm for partially observable Markov decision problems. : Increasing attention has been paid to reinforcement learning algorithms in recent years, partly due to successes in the theoretical analysis of their behavior in Markov environments. If the Markov assumption is removed, however, neither the algorithms nor their analyses remain usable in general.
We propose and analyze a new learning algorithm to solve a certain class of non-Markov decision problems. Our algorithm applies to problems in which the environment is Markov, but the learner has restricted access to state information. The algorithm involves a Monte-Carlo policy evaluation combined with a policy improvement method that is similar to that of Markov decision problems and is guaranteed to converge to a local maximum. The algorithm operates in the space of stochastic policies, a space which can yield a policy that performs considerably better than any deterministic policy. Although the space of stochastic policies is continuous (even for a discrete action space), our algorithm is computationally tractable. I provide the content of the target node and its neighbors' information. The relation between the target node and its 1-hop neighbors is 'citation'. The 7 categories are: 0: Rule_Learning 1: Neural_Networks 2: Case_Based 3: Genetic_Algorithms 4: Theory 5: Reinforcement_Learning 6: Probabilistic_Methods Question: Based on the content of the target and neighbors' scientific publications, predict the category ID (0 to 6) for the target node.
5
Reinforcement Learning
cora
986
test
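The target above builds on TD methods for prediction. A minimal tabular TD(0) sketch on the random-walk chain used as a test problem in the TD literature (step size, episode count, and chain length are arbitrary illustrative choices):

```python
import random

def td0_random_walk(episodes=5000, alpha=0.05, gamma=1.0, n=5):
    """TD(0) value prediction on an n-state random walk: move left or
    right with equal probability; reward 1 only on exiting to the right.
    Updates use the difference between temporally successive predictions."""
    V = [0.0] * (n + 2)                  # states 0 and n+1 are terminal
    for _ in range(episodes):
        s = (n + 1) // 2                 # start in the middle
        while 0 < s < n + 1:
            s2 = s + random.choice((-1, 1))
            r = 1.0 if s2 == n + 1 else 0.0
            V[s] += alpha * (r + gamma * V[s2] - V[s])   # TD-error update
            s = s2
    return [round(v, 2) for v in V[1:n + 1]]

print(td0_random_walk())                 # true values: k/6 for k = 1..5
```

The update never waits for the episode's final outcome, which is the memory and peak-computation advantage the first abstract claims over outcome-based prediction learning.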
1-hop neighbor's text information: The Expandable Split Window Paradigm for Exploiting Fine-Grain Parallelism, : We propose a new processing paradigm, called the Expandable Split Window (ESW) paradigm, for exploiting fine-grain parallelism. This paradigm considers a window of instructions (possibly having dependencies) as a single unit, and exploits fine-grain parallelism by overlapping the execution of multiple windows. The basic idea is to connect multiple sequential processors, in a decoupled and decentralized manner, to achieve overall multiple issue. This processing paradigm shares a number of properties of the restricted dataflow machines, but was derived from the sequential von Neumann architecture. We also present an implementation of the Expandable Split Window execution model, and preliminary performance results. 1-hop neighbor's text information: Limits of control flow on parallelism. : This paper discusses three techniques useful in relaxing the constraints imposed by control flow on parallelism: control dependence analysis, executing multiple flows of control simultaneously, and speculative execution. We evaluate these techniques by using trace simulations to find the limits of parallelism for machines that employ different combinations of these techniques. We have three major results. First, local regions of code have limited parallelism, and control dependence analysis is useful in extracting global parallelism from different parts of a program. Second, a superscalar processor is fundamentally limited because it cannot execute independent regions of code concurrently. Higher performance can be obtained with machines, such as multiprocessors and dataflow machines, that can simultaneously follow multiple flows of control. Finally, without speculative execution to allow instructions to execute before their control dependences are resolved, only modest amounts of parallelism can be obtained for programs with complex control flow. 1-hop neighbor's text information: A hardware mechanism for dynamic reordering of memory references. : Target text information: Control Flow Prediction for Dynamic ILP Processors. : We introduce a technique to enhance the ability of dynamic ILP processors to exploit (speculatively executed) parallelism. Existing branch prediction mechanisms used to establish a dynamic window from which ILP can be extracted are limited in their abilities to: (i) create a large, accurate dynamic window, (ii) initiate a large number of instructions into this window in every cycle, and (iii) traverse multiple branches of the control flow graph per prediction. We introduce control flow prediction which uses information in the control flow graph of a program to overcome these limitations. We discuss how information present in the control flow graph can be represented using multiblocks, and conveyed to the hardware using Control Flow Tables and Control Flow Prediction Buffers. We evaluate the potential of control flow prediction on an abstract machine and on a dynamic ILP processing model. Our results indicate that control flow prediction is a powerful and effective assist to the hardware in making more informed run time decisions about program control flow. I provide the content of the target node and its neighbors' information. The relation between the target node and its 1-hop neighbors is 'citation'. 
The 7 categories are: 0: Rule_Learning 1: Neural_Networks 2: Case_Based 3: Genetic_Algorithms 4: Theory 5: Reinforcement_Learning 6: Probabilistic_Methods Question: Based on the content of the target and neighbors' scientific publications, predict the category ID (0 to 6) for the target node.
0
Rule Learning
cora
1,034
test
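The target paper above extends hardware branch prediction. As background, a minimal sketch of the classic per-branch two-bit saturating-counter predictor that such control-flow prediction work builds on (the trace and the initial counter state are made-up illustrations):

```python
def simulate_2bit_predictor(trace):
    """Two-bit saturating counter per branch: counters range 0-3, predict
    taken when counter >= 2; a single anomalous outcome cannot flip a
    strongly biased prediction.  trace: list of (branch_pc, taken) pairs."""
    counters, correct = {}, 0
    for pc, taken in trace:
        c = counters.get(pc, 2)            # weakly-taken initial state
        if (c >= 2) == taken:
            correct += 1
        # Saturate at 0 and 3 instead of wrapping.
        counters[pc] = min(3, c + 1) if taken else max(0, c - 1)
    return correct / len(trace)

# Loop branch: taken nine times, then falls through (made-up trace).
trace = [(0x40, True)] * 9 + [(0x40, False)]
print(f"accuracy: {simulate_2bit_predictor(trace * 10):.2f}")
```

The multiblock and control-flow-table mechanisms in the target abstract aim past this per-branch view, predicting across several branches of the control flow graph at once.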
1-hop neighbor's text information: Exploiting Choice: Instruction Fetch and Issue on an implementable Simultaneous Multithreading Processor. : Simultaneous multithreading is a technique that permits multiple independent threads to issue multiple instructions each cycle. In previous work we demonstrated the performance potential of simultaneous multithreading, based on a somewhat idealized model. In this paper we show that the throughput gains from simultaneous multithreading can be achieved without extensive changes to a conventional wide-issue superscalar, either in hardware structures or sizes. We present an architecture for simultaneous multithreading that achieves three goals: (1) it minimizes the architectural impact on the conventional superscalar design, (2) it has minimal performance impact on a single thread executing alone, and (3) it achieves significant throughput gains when running multiple threads. Our simultaneous multithreading architecture achieves a throughput of 5.4 instructions per cycle, a 2.5-fold improvement over an unmodified superscalar with similar hardware resources. This speedup is enhanced by an advantage of multithreading previously unexploited in other architectures: the ability to favor for fetch and issue those threads most efficiently using the processor each cycle, thereby providing the best instructions to the processor. 1-hop neighbor's text information: Threaded multiple path execution. : This paper presents Threaded Multi-Path Execution (TME), which exploits existing hardware on a Simultaneous Multi-threading (SMT) processor to speculatively execute multiple paths of execution. When there are fewer threads in an SMT processor than hardware contexts, threaded multi-path execution uses spare contexts to fetch and execute code along the less likely path of hard-to-predict branches. This paper describes the hardware mechanisms needed to enable an SMT processor to efficiently spawn speculative threads for threaded multi-path execution. The Mapping Synchronization Bus is described, which enables the spawning of these multiple paths. Policies are examined for deciding which branches to fork, and for managing competition between primary and alternate path threads for critical resources. Our results show that TME increases the single program performance of an SMT with eight thread contexts by 14%-23% on average, depending on the misprediction penalty, for programs with a high misprediction rate. 1-hop neighbor's text information: Selective eager execution on the polypath architecture. : Control-flow misprediction penalties are a major impediment to high performance in wide-issue superscalar processors. In this paper we present Selective Eager Execution (SEE), an execution model to overcome mis-speculation penalties by executing both paths after diffident branches. We present the micro-architecture of the PolyPath processor, which is an extension of an aggressive superscalar, out-of-order architecture. The PolyPath architecture uses a novel instruction tagging and register renaming mechanism to execute instructions from multiple paths simultaneously in the same processor pipeline, while retaining maximum resource availability for single-path code sequences. Results of our execution-driven, pipeline-level simulations show that SEE can improve performance by as much as 36% for the go benchmark, and an average of 14% on SPECint95, when compared to a normal superscalar, out-of-order, speculative execution, monopath processor.
Moreover, our architectural model is both elegant and practical to implement, using a small amount of additional state and control logic. Target text information: Multipath Execution: Opportunities and Limits: Even sophisticated branch-prediction techniques necessarily suffer some mispredictions, and even relatively small mispredict rates hurt performance substantially in current-generation processors. In this paper, we investigate schemes for improving performance in the face of imperfect branch predictors by having the processor simultaneously execute code from both the taken and not-taken outcomes of a branch. This paper presents data regarding the limits of multipath execution, considers fetch-bandwidth needs for multipath execution, and discusses various dynamic confidence-prediction schemes that gauge the likelihood of branch mispredictions. Our evaluations consider executing along several (2 to 8) paths at once. Using 4 paths and a relatively simple confidence predictor, multipath execution garners speedups of up to 30% compared to the single-path case, with an average speedup of 14.4% for the SPECint suite. While associated increases in instruction-fetch-bandwidth requirements are not too surprising, a less expected result is the significance of having a separate return-address stack for each forked path. Overall, our results indicate that multipath execution offers significant improvements over single-path performance, and could be especially useful when combined with multithreading so that hardware costs can be amortized over both approaches. I provide the content of the target node and its neighbors' information. The relation between the target node and its 1-hop neighbors is 'citation'. The 7 categories are: 0: Rule_Learning 1: Neural_Networks 2: Case_Based 3: Genetic_Algorithms 4: Theory 5: Reinforcement_Learning 6: Probabilistic_Methods Question: Based on the content of the target and neighbors' scientific publications, predict the category ID (0 to 6) for the target node.
0
Rule Learning
cora
568
val
1-hop neighbor's text information: Coordinating Reactive Behaviors keywords: reactive systems, planning and learning: Combining reactivity with planning has been proposed as a means of compensating for potentially slow response times of planners while still making progress toward long term goals. The demands of rapid response and the complexity of many environments make it difficult to decompose, tune and coordinate reactive behaviors while ensuring consistency. Neural networks can address the tuning problem, but are less useful for decomposition and coordination. We hypothesize that interacting reactions can be decomposed into separate behaviors resident in separate networks and that the interaction can be coordinated through the tuning mechanism and a higher level controller. To explore these issues, we have implemented a neural network architecture as the reactive component of a two layer control system for a simulated race car. By varying the architecture, we test whether decomposing reactivity into separate behaviors leads to superior overall performance, coordination and learning convergence. 1-hop neighbor's text information: Learning to predict by the methods of temporal differences. : This article introduces a class of incremental learning procedures specialized for prediction; that is, for using past experience with an incompletely known system to predict its future behavior. Whereas conventional prediction-learning methods assign credit by means of the difference between predicted and actual outcomes, the new methods assign credit by means of the difference between temporally successive predictions. Although such temporal-difference methods have been used in Samuel's checker player, Holland's bucket brigade, and the author's Adaptive Heuristic Critic, they have remained poorly understood. Here we prove their convergence and optimality for special cases and relate them to supervised-learning methods. For most real-world prediction problems, temporal-difference methods require less memory and less peak computation than conventional methods; and they produce more accurate predictions. We argue that most problems to which supervised learning is currently applied are really prediction problems of the sort to which temporal-difference methods can be applied to advantage. 1-hop neighbor's text information: On the Computational Economics of Reinforcement Learning. : Following terminology used in adaptive control, we distinguish between indirect learning methods, which learn explicit models of the dynamic structure of the system to be controlled, and direct learning methods, which do not. We compare an existing indirect method, which uses a conventional dynamic programming algorithm, with a closely related direct reinforcement learning method by applying both methods to an infinite horizon Markov decision problem with unknown state-transition probabilities. The simulations show that although the direct method requires much less space and dramatically less computation per control action, its learning ability in this task is superior to, or compares favorably with, that of the more complex indirect method. Although these results do not address how the methods' performances compare as problems become more difficult, they suggest that given a fixed amount of computational power available per control action, it may be better to use a direct reinforcement learning method augmented with indirect techniques than to devote all available resources to a computationally costly indirect method.
Comprehensive answers to the questions raised by this study depend on many factors making up the economic context of the computation. Target text information: Strategy Learning with Multilayer Connectionist Representations. : Results are presented that demonstrate the learning and fine-tuning of search strategies using connectionist mechanisms. Previous studies of strategy learning within the symbolic, production-rule formalism have not addressed fine-tuning behavior. Here a two-layer connectionist system is presented that develops its search from a weak to a task-specific strategy and fine-tunes its performance. The system is applied to a simulated, real-time, balance-control task. We compare the performance of one-layer and two-layer networks, showing that the ability of the two-layer network to discover new features and thus enhance the original representation is critical to solving the balancing task. I provide the content of the target node and its neighbors' information. The relation between the target node and its 1-hop neighbors is 'citation'. The 7 categories are: 0: Rule_Learning 1: Neural_Networks 2: Case_Based 3: Genetic_Algorithms 4: Theory 5: Reinforcement_Learning 6: Probabilistic_Methods Question: Based on the content of the target and neighbors' scientific publications, predict the category ID (0 to 6) for the target node.
5
Reinforcement Learning
cora
2,164
test
1-hop neighbor's text information: Incremental evolution of complex general behavior. : Several researchers have demonstrated how complex action sequences can be learned through neuro-evolution (i.e. evolving neural networks with genetic algorithms). However, complex general behavior such as evading predators or avoiding obstacles, which is not tied to specific environments, turns out to be very difficult to evolve. Often the system discovers mechanical strategies (such as moving back and forth) that help the agent cope, but are not very effective, do not appear believable and would not generalize to new environments. The problem is that a general strategy is too difficult for the evolution system to discover directly. This paper proposes an approach where such complex general behavior is learned incrementally, by starting with simpler behavior and gradually making the task more challenging and general. The task transitions are implemented through successive stages of delta-coding (i.e. evolving modifications), which allows even converged populations to adapt to the new task. The method is tested in the stochastic, dynamic task of prey capture, and compared with direct evolution. The incremental approach evolves more effective and more general behavior, and should also scale up to harder tasks. 1-hop neighbor's text information: Efficient reinforcement learning through symbiotic evolution. : This article presents a new reinforcement learning method called SANE (Symbiotic, Adaptive Neuro-Evolution), which evolves a population of neurons through genetic algorithms to form a neural network capable of performing a task. Symbiotic evolution promotes both cooperation and specialization, which results in a fast, efficient genetic search and discourages convergence to suboptimal solutions. In the inverted pendulum problem, SANE formed effective networks 9 to 16 times faster than the Adaptive Heuristic Critic and 2 times faster than Q-learning and the GENITOR neuro-evolution approach without loss of generalization. Such efficient learning, combined with few domain assumptions, makes SANE a promising approach to a broad range of reinforcement learning problems, including many real-world applications. 1-hop neighbor's text information: Symbiotic Evolution of Neural Networks in Sequential Decision Tasks. : Target text information: 2-D Pole Balancing with Recurrent Evolutionary Networks: The success of evolutionary methods on standard control learning tasks has created a need for new benchmarks. The classic pole balancing problem is no longer difficult enough to serve as a viable yardstick for measuring the learning efficiency of these systems. In this paper we present a more difficult version of the classic problem where the cart and pole can move in a plane. We demonstrate a neuroevolution system (Enforced Sub-Populations, or ESP) that can solve this difficult problem without velocity information. I provide the content of the target node and its neighbors' information. The relation between the target node and its 1-hop neighbors is 'citation'. The 7 categories are: 0: Rule_Learning 1: Neural_Networks 2: Case_Based 3: Genetic_Algorithms 4: Theory 5: Reinforcement_Learning 6: Probabilistic_Methods Question: Based on the content of the target and neighbors' scientific publications, predict the category ID (0 to 6) for the target node.
5
Reinforcement Learning
cora
461
test
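The records above evolve neural network controllers. The following is a deliberately simplified neuroevolution sketch (a fixed-topology net and mutation-only search on a toy surrogate task), not an implementation of SANE or ESP; the topology, population settings, and fitness task are all assumptions:

```python
import random, math

def net_output(weights, x):
    """Tiny fixed-topology net: 2 inputs -> 2 tanh hidden units -> 1 output."""
    h1 = math.tanh(weights[0] * x[0] + weights[1] * x[1] + weights[2])
    h2 = math.tanh(weights[3] * x[0] + weights[4] * x[1] + weights[5])
    return math.tanh(weights[6] * h1 + weights[7] * h2 + weights[8])

def fitness(weights):
    """Toy surrogate for a control task: push the output toward the sign
    of x0 - x1 on sampled inputs (higher is better)."""
    score = 0.0
    for _ in range(50):
        x = (random.uniform(-1, 1), random.uniform(-1, 1))
        target = 1.0 if x[0] > x[1] else -1.0
        score -= (net_output(weights, x) - target) ** 2
    return score

def evolve(pop_size=50, gens=40, sigma=0.3):
    pop = [[random.gauss(0, 1) for _ in range(9)] for _ in range(pop_size)]
    for _ in range(gens):
        pop.sort(key=fitness, reverse=True)
        elite = pop[:pop_size // 5]
        # Refill by mutating elite genomes (no crossover, for brevity).
        pop = elite + [[w + random.gauss(0, sigma) for w in random.choice(elite)]
                       for _ in range(pop_size - len(elite))]
    return pop[0]                     # best member of the final elite

best = evolve()
print("fitness of best evolved net:", round(fitness(best), 2))
```

SANE and ESP differ from this whole-genome scheme precisely by evolving individual neurons or sub-populations that must cooperate within a network, which is what the abstracts credit for their speed.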
1-hop neighbor's text information: Systematic Evaluation of Design Decisions in CBR Systems: Two important goals in the evaluation of an AI theory or model are to assess the merit of the design decisions in the performance of an implemented computer system and to analyze the impact on performance when the system faces problem domains with different characteristics. This is particularly difficult in case-based reasoning systems because such systems are typically very complex, as are the tasks and domains in which they operate. We present a methodology for the evaluation of case-based reasoning systems through systematic empirical experimentation over a range of system configurations and environmental conditions, coupled with rigorous statistical analysis of the results of the experiments. This methodology enables us to understand the behavior of the system in terms of the theory and design of the computational model, to select the best system configuration for a given domain, and to predict how the system will behave in response to changing domain and problem characteristics. A case study of a multistrategy case-based and reinforcement learning system which performs autonomous robotic navigation is presented as an example. 1-hop neighbor's text information: Towards formalizations in case-based reasoning for synthesis. : This paper presents the formalization of a novel approach to structural similarity assessment and adaptation in case-based reasoning (Cbr) for synthesis. The approach has been informally presented, exemplified, and implemented for the domain of industrial building design (Borner 1993). By relating the approach to existing theories we provide the foundation of its systematic evaluation and appropriate usage. Cases, the primary repository of knowledge, are represented structurally using an algebraic approach. Similarity relations provide structure preserving case modifications modulo the underlying algebra and an equational theory over the algebra (so available). This representation of a modeled universe of discourse enables theory-based inference of adapted solutions. The approach enables us to incorporate formally generalization, abstraction, geometrical transformation, and their combinations into Cbr. Target text information: Structure oriented case retrieval. Fourth German Workshop on Case-Based Reasoning: System Development and Evaluation (pp. : I provide the content of the target node and its neighbors' information. The relation between the target node and its 1-hop neighbors is 'citation'. The 7 categories are: 0: Rule_Learning 1: Neural_Networks 2: Case_Based 3: Genetic_Algorithms 4: Theory 5: Reinforcement_Learning 6: Probabilistic_Methods Question: Based on the content of the target and neighbors' scientific publications, predict the category ID (0 to 6) for the target node.
2
Case Based
cora
498
test
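The record above concerns case retrieval. The structure-oriented retrieval of the target paper does not condense well, so the sketch below shows the simpler attribute-value baseline it departs from: weighted nearest-neighbor retrieval over flat cases. The cases, attributes, and weights are entirely made up:

```python
def retrieve(case_base, query, weights, k=1):
    """Weighted nearest-neighbor case retrieval: score each stored case by
    attribute similarity to the query and return the k best matches."""
    def similarity(case):
        return sum(w * (case[attr] == value)
                   for (attr, value), w in zip(query.items(), weights))
    ranked = sorted(case_base, key=similarity, reverse=True)
    return ranked[:k]

# Made-up design cases with symbolic attributes.
cases = [
    {"shape": "L", "floors": 2, "use": "office", "solution": "plan-17"},
    {"shape": "U", "floors": 3, "use": "lab",    "solution": "plan-42"},
    {"shape": "L", "floors": 3, "use": "lab",    "solution": "plan-08"},
]
query = {"shape": "L", "floors": 3, "use": "lab"}
print(retrieve(cases, query, weights=[0.2, 0.3, 0.5]))
```

Structure-oriented approaches replace this flat attribute match with similarity over the cases' relational structure, which is the point of the algebraic formalization in the second abstract.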
1-hop neighbor's text information: Protein structure prediction: Selecting salient features from large candidate pools. : We introduce a parallel approach, "DT-Select," for selecting features used by inductive learning algorithms to predict protein secondary structure. DT-Select is able to rapidly choose small, nonredundant feature sets from pools containing hundreds of thousands of potentially useful features. It does this by building a decision tree, using features from the pool, that classifies a set of training examples. The features included in the tree provide a compact description of the training data and are thus suitable for use as inputs to other inductive learning algorithms. Empirical experiments in the protein secondary-structure task, in which sets of complex features chosen by DT-Select are used to augment a standard artificial neural network representation, yield surprisingly little performance gain, even though features are selected from very large feature pools. We discuss some possible reasons for this result. 1-hop neighbor's text information: Learning to represent codons: A challenge problem for constructive induction. : The ability of an inductive learning system to find a good solution to a given problem is dependent upon the representation used for the features of the problem. Systems that perform constructive induction are able to change their representation by constructing new features. We describe an important, real-world problem (finding genes in DNA) that we believe offers an interesting challenge to constructive-induction researchers. We report experiments that demonstrate that: (1) two different input representations for this task result in significantly different generalization performance for both neural networks and decision trees; and (2) both neural and symbolic methods for constructive induction fail to bridge the gap between these two representations. We believe that this real-world domain provides an interesting challenge problem for constructive induction because the relationship between the two representations is well known, and because the representational shift involved in constructing the better representation is not imposing. 1-hop neighbor's text information: Introduction to the Theory of Neural Computation. : Neural computation, also called connectionism, parallel distributed processing, neural network modeling or brain-style computation, has grown rapidly in the last decade. Despite this explosion, and ultimately because of impressive applications, there has been a dire need for a concise introduction from a theoretical perspective, analyzing the strengths and weaknesses of connectionist approaches and establishing links to other disciplines, such as statistics or control theory. The Introduction to the Theory of Neural Computation by Hertz, Krogh and Palmer (subsequently referred to as HKP) is written from the perspective of physics, the home discipline of the authors. The book fulfills its mission as an introduction for neural network novices, provided that they have some background in calculus, linear algebra, and statistics. It covers a number of models that are often viewed as disjoint. Critical analyses and fruitful comparisons between these models Target text information: Learning to predict reading frames in E. coli DNA sequences. : Two fundamental problems in analyzing DNA sequences are (1) locating the regions of a DNA sequence that encode proteins, and (2) determining the reading frame for each region.
We investigate using artificial neural networks (ANNs) to find coding regions, determine reading frames, and detect frameshift errors in E. coli DNA sequences. We describe our adaptation of the approach used by Uberbacher and Mural to identify coding regions in human DNA, and we compare the performance of ANNs to several conventional methods for predicting reading frames. Our experiments demonstrate that ANNs can outperform these conventional approaches. I provide the content of the target node and its neighbors' information. The relation between the target node and its 1-hop neighbors is 'citation'. The 7 categories are: 0: Rule_Learning 1: Neural_Networks 2: Case_Based 3: Genetic_Algorithms 4: Theory 5: Reinforcement_Learning 6: Probabilistic_Methods Question: Based on the content of the target and neighbors' scientific publications, predict the category ID (0 to 6) for the target node.
1
Neural Networks
cora
1,657
test
1-hop neighbor's text information: Improving elevator performance using reinforcement learning. : This paper describes the application of reinforcement learning (RL) to the difficult real world problem of elevator dispatching. The elevator domain poses a combination of challenges not seen in most RL research to date. Elevator systems operate in continuous state spaces and in continuous time as discrete event dynamic systems. Their states are not fully observable and they are nonstationary due to changing passenger arrival rates. In addition, we use a team of RL agents, each of which is responsible for controlling one elevator car. The team receives a global reinforcement signal which appears noisy to each agent due to the effects of the actions of the other agents, the random nature of the arrivals and the incomplete observation of the state. In spite of these complications, we show results that in simulation surpass the best of the heuristic elevator control algorithms of which we are aware. These results demonstrate the power of RL on a very large scale stochastic dynamic optimization problem of practical utility. 1-hop neighbor's text information: Learning to Act using Real-Time Dynamic Programming. : fl The authors thank Rich Yee, Vijay Gullapalli, Brian Pinette, and Jonathan Bachrach for helping to clarify the relationships between heuristic search and control. We thank Rich Sutton, Chris Watkins, Paul Werbos, and Ron Williams for sharing their fundamental insights into this subject through numerous discussions, and we further thank Rich Sutton for first making us aware of Korf's research and for his very thoughtful comments on the manuscript. We are very grateful to Dimitri Bertsekas and Steven Sullivan for independently pointing out an error in an earlier version of this article. Finally, we thank Harry Klopf, whose insight and persistence encouraged our interest in this class of learning problems. This research was supported by grants to A.G. Barto from the National Science Foundation (ECS-8912623 and ECS-9214866) and the Air Force Office of Scientific Research, Bolling AFB (AFOSR-89-0526). 1-hop neighbor's text information: High-Performance Job-Shop Scheduling With A Time-Delay TD(λ) Network. : Job-shop scheduling is an important task for manufacturing industries. We are interested in the particular task of scheduling payload processing for NASA's space shuttle program. This paper summarizes our previous work on formulating this task for solution by the reinforcement learning algorithm TD(λ). A shortcoming of this previous work was its reliance on hand-engineered input features. This paper shows how to extend the time-delay neural network (TDNN) architecture to apply it to irregular-length schedules. Experimental tests show that this TDNN-TD(λ) network can match the performance of our previous hand-engineered system. The tests also show that both neural network approaches significantly outperform the best previous (non-learning) solution to this problem in terms of the quality of the resulting schedules and the number of search steps required to construct them. Target text information: Reinforcement Learning for Dynamic Channel Allocation in: In cellular telephone systems, an important problem is to dynamically allocate the communication resource (channels) so as to maximize service in a stochastic caller environment.
This problem is naturally formulated as a dynamic programming problem and we use a reinforcement learning (RL) method to find dynamic channel allocation policies that are better than previous heuristic solutions. The policies obtained perform well for a broad variety of call traffic patterns. We present results on a large cellular system with approximately 70^49 states. In cellular communication systems, an important problem is to allocate the communication resource (bandwidth) so as to maximize the service provided to a set of mobile callers whose demand for service changes stochastically. A given geographical area is divided into mutually disjoint cells, and each cell serves the calls that are within its boundaries (see Figure 1a). The total system bandwidth is divided into channels, with each channel centered around a frequency. Each channel can be used simultaneously at different cells, provided these cells are sufficiently separated spatially, so that there is no interference between them. The minimum separation distance between simultaneous reuse of the same channel is called the channel reuse constraint. When a call requests service in a given cell, either a free channel (one that does not violate the channel reuse constraint) may be assigned to the call, or else the call is blocked from the system; this will happen if no free channel can be found. Also, when a mobile caller crosses from one cell to another, the call is "handed off" to the cell of entry; that is, a new free channel is provided to the call at the new cell. If no such channel is available, the call must be dropped/disconnected from the system. One objective of a channel allocation policy is to allocate the available channels to calls so that the number of blocked calls is minimized. An additional objective is to minimize the number of calls that are dropped when they are handed off to a busy cell. These two objectives must be weighted appropriately to reflect their relative importance, since dropping existing calls is generally more undesirable than blocking new calls. I provide the content of the target node and its neighbors' information. The relation between the target node and its 1-hop neighbors is 'citation'. The 7 categories are: 0: Rule_Learning 1: Neural_Networks 2: Case_Based 3: Genetic_Algorithms 4: Theory 5: Reinforcement_Learning 6: Probabilistic_Methods Question: Based on the content of the target and neighbors' scientific publications, predict the category ID (0 to 6) for the target node.
5
Reinforcement Learning
cora
157
test
1-hop neighbor's text information: Rule induction with CN2: some recent improvements. : The CN2 algorithm induces an ordered list of classification rules from examples using entropy as its search heuristic. In this short paper, we describe two improvements to this algorithm. Firstly, we present the use of the Laplacian error estimate as an alternative evaluation function and secondly, we show how unordered as well as ordered rules can be generated. We experimentally demonstrate significantly improved performances resulting from these changes, thus enhancing the usefulness of CN2 as an inductive tool. Comparisons with Quinlan's C4.5 are also made. 1-hop neighbor's text information: Lookahead and discretization in ILP. : We present and evaluate two methods for improving the performance of ILP systems. One of them is discretization of numerical attributes, based on Fayyad and Irani's text [9], but adapted and extended in such a way that it can cope with some aspects of discretization that only occur in relational learning problems (when indeterminate literals occur). The second technique is lookahead. It is a well-known problem in ILP that a learner cannot always assess the quality of a refinement without knowing which refinements will be enabled afterwards, i.e. without looking ahead in the refinement lattice. We present a simple method for specifying when lookahead is to be used, and what kind of lookahead is interesting. Both the discretization and lookahead techniques are evaluated experimentally. The results show that both techniques improve the quality of the induced theory, while computational costs are acceptable. 1-hop neighbor's text information: Inductive constraint logic and the mutagenesis problem. : A novel approach to learning first order logic formulae from positive and negative examples is incorporated in a system named ICL (Inductive Constraint Logic). In ICL, examples are viewed as interpretations which are true or false for the target theory, whereas in present inductive logic programming systems, examples are true and false ground facts (or clauses). Furthermore, ICL uses a clausal representation, which corresponds to a conjunctive normal form where each conjunct forms a constraint on positive examples, whereas classical learning techniques have concentrated on concept representations in disjunctive normal form. We present some experiments with this new system on the mutagenesis problem. These experiments illustrate some of the differences with other systems, and indicate that our approach should work at least as well as the more classical approaches. Target text information: Multi-class problems and discretization in ICL (extended abstract). : Handling multi-class problems and real numbers is important in practical applications of machine learning to KDD problems. While attribute-value learners address these problems as a rule, very few ILP systems do so. The few ILP systems that handle real numbers mostly do so by trying out all real values that are applicable, thus running into efficiency or overfitting problems. This paper discusses some recent extensions of ICL that address these problems. ICL, which stands for Inductive Constraint Logic, is an ILP system that learns first order logic formulae from positive and negative examples. The main characteristic of ICL is its view on examples. These are seen as interpretations which are true or false for the clausal target theory (in CNF).
We first argue that ICL can be used for learning a theory in a disjunctive normal form (DNF). With this in mind, a possible solution for handling more than two classes is given (based on some ideas from CN2). Finally, we show how to tackle problems with continuous values by adapting discretization techniques from attribute value learners. I provide the content of the target node and its neighbors' information. The relation between the target node and its 1-hop neighbors is 'citation'. The 7 categories are: 0: Rule_Learning 1: Neural_Networks 2: Case_Based 3: Genetic_Algorithms 4: Theory 5: Reinforcement_Learning 6: Probabilistic_Methods Question: Based on the content of the target and neighbors' scientific publications, predict the category ID (0 to 6) for the target node.
0
Rule Learning
cora
2,418
test
1-hop neighbor's text information: editors. Machine Learning, Meta-Reasoning and Logics. : Systems interacting with real-world data must address the issues raised by the possible presence of errors in the observations they make. In this paper we first present a framework for discussing imperfect data and the resulting problems it may cause. We distinguish between two categories of errors in data (random errors or `noise', and systematic errors) and examine their relationship to the task of describing observations in a way which is also useful for helping in future problem-solving and learning tasks. Secondly, we proceed to examine some of the techniques currently used in AI research for recognising such errors. 1-hop neighbor's text information: (1990b) : "Knowledge Integration and Learning", : LIACC - Technical Report 91-1 Abstract. In this paper we address the problem of acquiring knowledge by integration. Our aim is to construct an integrated knowledge base from several separate sources. The objective of integration is to construct one system that exploits all the knowledge that is available and has good performance. The aim of this paper is to discuss the methodology of knowledge integration and present some concrete results. In our experiments the performance of the integrated theory exceeded the performance of the individual theories by quite a significant amount. Also, the performance did not fluctuate much when the experiments were repeated. These results indicate knowledge integration can complement other existing ML methods. Target text information: (1990) : Knowledge Acquisition via Knowledge Integration. In Current Trends in Knowledge Acquisition. Wielinga, B.: In this paper we are concerned with the problem of acquiring knowledge by integration. Our aim is to construct an integrated knowledge base from several separate sources. The need to merge knowledge bases can arise, for example, when knowledge bases are acquired independently from interactions with several domain experts. As opinions of different domain experts may differ, the knowledge bases constructed in this way will normally differ too. A similar problem can also arise whenever separate knowledge bases are generated by learning algorithms. The objective of integration is to construct one system that exploits all the knowledge that is available and has a good performance. The aim of this paper is to discuss the methodology of knowledge integration, describe the implemented system (INTEG.3), and present some concrete results which demonstrate the advantages of this method. I provide the content of the target node and its neighbors' information. The relation between the target node and its 1-hop neighbors is 'citation'. The 7 categories are: 0: Rule_Learning 1: Neural_Networks 2: Case_Based 3: Genetic_Algorithms 4: Theory 5: Reinforcement_Learning 6: Probabilistic_Methods Question: Based on the content of the target and neighbors' scientific publications, predict the category ID (0 to 6) for the target node.
0
Rule Learning
cora
1,566
test
1-hop neighbor's text information: "Using genetic algorithms to explore pattern recognition in the immune system," : We describe an immune system model based on a universe of binary strings. The model is directed at understanding the pattern recognition processes and learning that take place at both the individual and species levels in the immune system. The genetic algorithm (GA) is a central component of our model. In the paper we study the behavior of the GA on two pattern recognition problems that are relevant to natural immune systems. Finally, we compare our model with explicit fitness sharing techniques for genetic algorithms, and show that our model implements a form of implicit fitness sharing. 1-hop neighbor's text information: "A Coevolutionary Approach to Learning Sequential Decision Rules", : We present a coevolutionary approach to learning sequential decision rules which appears to have a number of advantages over non-coevolutionary approaches. The coevolutionary approach encourages the formation of stable niches representing simpler sub-behaviors. The evolutionary direction of each subbehavior can be controlled independently, providing an alternative to evolving complex behavior using intermediate training steps. Results are presented showing a significant learning rate speedup over a non-coevolutionary approach in a simulated robot domain. In addition, the results suggest the coevolutionary approach may lead to emer gent problem decompositions. 1-hop neighbor's text information: A cooperative coevolutionary approach to function optimization. : A general model for the coevolution of cooperating species is presented. This model is instantiated and tested in the domain of function optimization, and compared with a traditional GA-based function optimizer. The results are encouraging in two respects. They suggest ways in which the performance of GA and other EA-based optimizers can be improved, and they suggest a new approach to evolving complex structures such as neural networks and rule sets. Target text information: De Jong and John J. : A cooperative coevolutionary approach to learning complex structures is presented which, although preliminary in nature, appears to have a number of advantages over non-coevolutionary approaches. The cooperative coevolutionary approach encourages the parallel evolution of substructures which interact in useful ways to form more complex higher level structures. The architecture is designed to be general enough to permit the inclusion, if appropriate, of a priori knowledge in the form of initial biases towards particular kinds of decompositions. A brief summary of initial results obtained from testing this architecture in several problem domains is presented which shows a significant speedup over more traditional non-coevolutionary approaches. I provide the content of the target node and its neighbors' information. The relation between the target node and its 1-hop neighbors is 'citation'. The 7 categories are: 0: Rule_Learning 1: Neural_Networks 2: Case_Based 3: Genetic_Algorithms 4: Theory 5: Reinforcement_Learning 6: Probabilistic_Methods Question: Based on the content of the target and neighbors' scientific publications, predict the category ID (0 to 6) for the target node.
3
Genetic Algorithms
cora
2,153
test
1-hop neighbor's text information: Learning acyclic first-order Horn sentences from entailment. : In this paper, we consider learning first-order Horn programs from entailment. In particular, we show that any subclass of first-order acyclic Horn programs with constant arity is exactly learnable from equivalence and entailment membership queries provided it allows a polynomial-time subsumption procedure and satisfies some closure conditions. One consequence of this is that first-order acyclic determinate Horn programs with constant arity are exactly learnable from equivalence and entailment membership queries. 1-hop neighbor's text information: Learning goal-decomposition rules using exercises. : Exercises are problems ordered in increasing order of difficulty. Teaching problem-solving through exercises is a widely used pedagogic technique. A computational reason for this is that the knowledge gained by solving simple problems is useful in efficiently solving more difficult problems. We adopt this approach of learning from exercises to acquire search-control knowledge in the form of goal-decomposition rules (d-rules). D-rules are first order, and are learned using a new "generalize-and-test" algorithm which is based on inductive logic programming techniques. We demonstrate the feasibility of the approach by applying it in two planning domains. Target text information: Learning Horn definitions using equivalence and membership queries. : A Horn definition is a set of Horn clauses with the same head literal. In this paper, we consider learning non-recursive, function-free first-order Horn definitions. We show that this class is exactly learnable from equivalence and membership queries. It follows then that this class is PAC learnable using examples and membership queries. Our results have been shown to be applicable to learning efficient goal-decomposition rules in planning domains. I provide the content of the target node and its neighbors' information. The relation between the target node and its 1-hop neighbors is 'citation'. The 7 categories are: 0: Rule_Learning 1: Neural_Networks 2: Case_Based 3: Genetic_Algorithms 4: Theory 5: Reinforcement_Learning 6: Probabilistic_Methods Question: Based on the content of the target and neighbors' scientific publications, predict the category ID (0 to 6) for the target node.
4
Theory
cora
1,023
val
1-hop neighbor's text information: Causality in genetic programming. : Machine learning aims towards the acquisition of knowledge based on either experience from the interaction with the external environment or by analyzing the internal problem-solving traces. Both approaches can be implemented in the Genetic Programming (GP) paradigm. [Hillis, 1990] proves in an ingenious way how the first approach can work. There have not been any significant tests to prove that GP can take advantage of its own search traces. This paper presents an approach to automatic discovery of functions in GP based on the ideas of discovery of useful building blocks by analyzing the evolution trace, generalizing of blocks to define new functions and finally adapting of the problem representation on-the-fly. Adaptation of the representation determines a hierarchical organization of the extended function set which enables a restructuring of the search space so that solutions can be found more easily. Complexity measures of solution trees are defined for an adaptive representation framework and empirical results are presented. This material is based on work supported by the National Science Foundation under Grant numbered IRI-8903582 by NIH/PHS research grant numbered 1 R24 RR06853-02 and by a Human Science Frontiers Program research grant. The government has certain rights in this material. 1-hop neighbor's text information: "Coevolving High Level Representations," : 1-hop neighbor's text information: An Analysis of Genetic Programming, : In this paper we carefully formulate a Schema Theorem for Genetic Programming (GP) using a schema definition that accounts for the variable length and the non-homologous nature of GP's representation. In a manner similar to early GA research, we use interpretations of our GP Schema Theorem to obtain a GP Building Block definition and to state a "classical" Building Block Hypothesis (BBH): that GP searches by hierarchically combining building blocks. We report that this approach is not convincing for several reasons: it is difficult to find support for the promotion and combination of building blocks solely by rigorous interpretation of a GP Schema Theorem; even if there were such support for a BBH, it is empirically questionable whether building blocks always exist because partial solutions of consistently above average fitness and resilience to disruption are not assured; also, a BBH constitutes a narrow and imprecise account of GP search behavior. Target text information: "Genetic Programming Exploratory Power and the Discovery of Functions," : Hierarchical genetic programming (HGP) approaches rely on the discovery, modification, and use of new functions to accelerate evolution. This paper provides a qualitative explanation of the improved behavior of HGP, based on an analysis of the evolution process from the dual perspective of diversity and causality. From a static point of view, the use of an HGP approach enables the manipulation of a population of higher diversity programs. Higher diversity increases the exploratory ability of the genetic search process, as demonstrated by theoretical and experimental fitness distributions and expanded structural complexity of individuals. From a dynamic point of view, an analysis of the causality of the crossover operator suggests that HGP discovers and exploits useful structures in a bottom-up, hierarchical manner. Diversity and causality are complementary, affecting exploration and exploitation in genetic search.
Unlike other machine learning techniques that need extra machinery to control the tradeoff between them, HGP automatically trades off exploration and exploitation. I provide the content of the target node and its neighbors' information. The relation between the target node and its 1-hop neighbors is 'citation'. The 7 categories are: 0: Rule_Learning 1: Neural_Networks 2: Case_Based 3: Genetic_Algorithms 4: Theory 5: Reinforcement_Learning 6: Probabilistic_Methods Question: Based on the content of the target and neighbors' scientific publications, predict the category ID (0 to 6) for the target node.
3
Genetic Algorithms
cora
2,137
train
1-hop neighbor's text information: Bayesian model averaging. : Technical Report no. 302, Department of Statistics, University of Washington. Chris Volinsky is a Research Assistant, David Madigan is a Professor of Statistics and Adrian E. Raftery is a Professor of Statistics and Sociology, Department of Statistics, Box 354322, University of Washington, Seattle, WA 98195. Richard A. Kronmal is a Professor of Biostatistics, Box 357232, University of Washington, Seattle, WA 98195. Email correspondence: [email protected] Target text information: An Improved Model for Spatially Correlated Binary Responses: In this paper we extend the basic autologistic model to include covariates and an indication of sampling effort. The model is applied to sampled data instead of the traditional use for image analysis where complete data are available. We adopt a Bayesian set-up and develop a hybrid Gibbs sampling estimation procedure. Using simulated examples, we show that the autologistic model with covariates for sample data improves predictions as compared to the simple logistic regression model and the standard autologistic model (without covariates). I provide the content of the target node and its neighbors' information. The relation between the target node and its 1-hop neighbors is 'citation'. The 7 categories are: 0: Rule_Learning 1: Neural_Networks 2: Case_Based 3: Genetic_Algorithms 4: Theory 5: Reinforcement_Learning 6: Probabilistic_Methods Question: Based on the content of the target and neighbors' scientific publications, predict the category ID (0 to 6) for the target node.
6
Probabilistic Methods
cora
617
val
1-hop neighbor's text information: "A Study in Program Response and the Negative Effects of Introns in Genetic Programming," : The standard method of obtaining a response in tree-based genetic programming is to take the value returned by the root node. In non-tree representations, alternate methods have been explored. One alternative is to treat a specific location in indexed memory as the response value when the program terminates. The purpose of this paper is to explore the applicability of this technique to tree-structured programs and to explore the intron effects that these studies bring to light. This paper's experimental results support the finding that this memory-based program response technique is an improvement for some, but not all, problems. In addition, this paper's experimental results support the finding that, contrary to past research and speculation, the addition or even facilitation of introns can seriously degrade the search performance of genetic programming. 1-hop neighbor's text information: Fitness causes bloat in variable size representations. : We argue based upon the numbers of representations of given length, that increase in representation length is inherent in using a fixed evaluation function with a discrete but variable length representation. Two examples of this are analysed, including the use of Price's Theorem. Both examples confirm the tendency for solutions to grow in size is caused by fitness based selection. 1-hop neighbor's text information: Evolving compact solutions in genetic programming: A case study. : Genetic programming (GP) is a variant of genetic algorithms where the data structures handled are trees. This makes GP especially useful for evolving functional relationships or computer programs, as both can be represented as trees. Symbolic regression is the determination of a function dependence y = g(x) that approximates a set of data points (x i ; y i ). In this paper the feasibility of symbolic regression with GP is demonstrated on two examples taken from different domains. Furthermore several suggested methods from literature are compared that are intended to improve GP performance and the readability of solutions by taking into account introns or redundancy that occurs in the trees and keeping the size of the trees small. The experiments show that GP is an elegant and useful tool to derive complex functional dependencies on numerical data. Target text information: Causality in genetic programming. : Machine learning aims towards the acquisition of knowledge based on either experience from the interaction with the external environment or by analyzing the internal problem-solving traces. Both approaches can be implemented in the Genetic Programming (GP) paradigm. [Hillis, 1990] proves in an ingenious way how the first approach can work. There have not been any significant tests to prove that GP can take advantage of its own search traces. This paper presents an approach to automatic discovery of functions in GP based on the ideas of discovery of useful building blocks by analyzing the evolution trace, generalizing of blocks to define new functions and finally adapting of the problem representation on-the-fly. Adaptation of the representation determines a hierarchical organization of the extended function set which enables a restructuring of the search space so that solutions can be found more easily. Complexity measures of solution trees are defined for an adaptive representation framework and empirical results are presented. 
This material is based on work supported by the National Science Foundation under Grant numbered IRI-8903582 by NIH/PHS research grant numbered 1 R24 RR06853-02 and by a Human Science Frontiers Program research grant. The government has certain rights in this material. I provide the content of the target node and its neighbors' information. The relation between the target node and its 1-hop neighbors is 'citation'. The 7 categories are: 0: Rule_Learning 1: Neural_Networks 2: Case_Based 3: Genetic_Algorithms 4: Theory 5: Reinforcement_Learning 6: Probabilistic_Methods Question: Based on the content of the target and neighbors' scientific publications, predict the category ID (0 to 6) for the target node.
3
Genetic Algorithms
cora
1,983
test
1-hop neighbor's text information: Evolution of non-deterministic incremental algorithms as a new approach for search in state spaces. : Let us call a non-deterministic incremental algorithm one that is able to construct any solution to a combinatorial problem by selecting incrementally an ordered sequence of choices that defines this solution, each choice being made non-deterministically. In that case, the state space can be represented as a tree, and a solution is a path from the root of that tree to a leaf. This paper describes how the simulated evolution of a population of such non-deterministic incremental algorithms offers a new approach for the exploration of a state space, compared to other techniques like Genetic Algorithms (GA), Evolutionary Strategies (ES) or Hill Climbing. In particular, the efficiency of this method, implemented as the Evolving Non-Determinism (END) model, is presented for the sorting network problem, a reference problem that has challenged computer science. Then, we shall show that the END model remedies some drawbacks of these optimization techniques and even outperforms them for this problem. Indeed, some 16-input sorting networks as good as the best known have been built from scratch, and even a 25-year-old result for the 13-input problem has been improved by one comparator. 1-hop neighbor's text information: Genetic Algorithms in Search, Optimization and Machine Learning. : Angeline, P., Saunders, G. and Pollack, J. (1993) An evolutionary algorithm that constructs recurrent neural networks, LAIR Technical Report #93-PA-GNARLY, Submitted to IEEE Transactions on Neural Networks Special Issue on Evolutionary Programming. 1-hop neighbor's text information: A unified gradient-descent/clustering algorithm architecture for finite state machine induction. : Although recurrent neural nets have been moderately successful in learning to emulate finite-state machines (FSMs), the continuous internal state dynamics of a neural net are not well matched to the discrete behavior of an FSM. We describe an architecture, called DOLCE, that allows discrete states to evolve in a net as learning progresses. dolce consists of a standard recurrent neural net trained by gradient descent and an adaptive clustering technique that quantizes the state space. dolce is based on the assumption that a finite set of discrete internal states is required for the task, and that the actual network state belongs to this set but has been corrupted by noise due to inaccuracy in the weights. dolce learns to recover the discrete state with maximum a posteriori probability from the noisy state. Simulations show that dolce leads to a significant improvement in generalization performance over earlier neural net approaches to FSM induction. Target text information: A Stochastic Search Approach to Grammar Induction: This paper describes a new sampling-based heuristic for tree search named SAGE and presents an analysis of its performance on the problem of grammar induction. This last work has been inspired by the Abbadingo DFA learning competition [14] which took place between March and November 1997. SAGE ended up as one of the two winners in that competition. The second winning algorithm, first proposed by Rodney Price, implements a new evidence-driven heuristic for state merging. Our own version of this heuristic is also described in this paper and compared to SAGE. I provide the content of the target node and its neighbors' information.
The relation between the target node and its 1-hop neighbors is 'citation'. The 7 categories are: 0: Rule_Learning 1: Neural_Networks 2: Case_Based 3: Genetic_Algorithms 4: Theory 5: Reinforcement_Learning 6: Probabilistic_Methods Question: Based on the content of the target and neighbors' scientific publications, predict the category ID (0 to 6) for the target node.
3
Genetic Algorithms
cora
565
test
1-hop neighbor's text information: Vuurpijl and Th.E. Schouten. A Scalable Performance Prediction Model for Parallel Neural Network Simulations. : A performance prediction method is presented for indicating the performance range of MIMD parallel processor systems for neural network simulations. The total execution time of a parallel application is modeled as the sum of its calculation and communication times. The method is scalable because, based on the times measured on one processor and one communication link, the performance, speedup, and efficiency can be predicted for a larger processor system. It is validated quantitatively by applying it to two popular neural networks, backpropagation and the Kohonen self-organizing feature map, decomposed on a GCel-512, a 512 transputer system. Agreement of the model with the measurements is within 9%. 1-hop neighbor's text information: Self-Organization and Associative Memory, : Selective suppression of transmission at feedback synapses during learning is proposed as a mechanism for combining associative feedback with self-organization of feedforward synapses. Experimental data demonstrates cholinergic suppression of synaptic transmission in layer I (feedback synapses), and a lack of suppression in layer IV (feed-forward synapses). A network with this feature uses local rules to learn mappings which are not linearly separable. During learning, sensory stimuli and desired response are simultaneously presented as input. Feedforward connections form self-organized representations of input, while suppressed feedback connections learn the transpose of feedforward connectivity. During recall, suppression is removed, sensory input activates the self-organized representation, and activity generates the learned response. Target text information: Performance Prediction of Large MIMD Systems for Parallel Neural Network Simulations. : In this paper, we present a performance prediction model for indicating the performance range of MIMD parallel processor systems for neural network simulations. The model expresses the total execution time of a simulation as a function of the execution times of a small number of kernel functions, which have to be measured on only one processor and one physical communication link. The functions depend on the type of neural network, its geometry, decomposition and the connection structure of the MIMD machine. Using the model, the execution time, speedup, scalability and efficiency of large MIMD systems can be predicted. The model is validated quantitatively by applying it to two popular neural networks, backpropagation and the Kohonen self-organizing feature map, decomposed on a GCel-512, a 512 transputer system. Measurements are taken from network simulations decomposed via dataset and network decomposition techniques. Agreement of the model with the measurements is within 1%-14%. Estimates are given for the performances that can be expected for the new T9000 transputer systems. The presented method can also be used for other application areas such as image processing. I provide the content of the target node and its neighbors' information. The relation between the target node and its 1-hop neighbors is 'citation'. The 7 categories are: 0: Rule_Learning 1: Neural_Networks 2: Case_Based 3: Genetic_Algorithms 4: Theory 5: Reinforcement_Learning 6: Probabilistic_Methods Question: Based on the content of the target and neighbors' scientific publications, predict the category ID (0 to 6) for the target node.
1
Neural Networks
cora
1,779
test
1-hop neighbor's text information: Markov Chain Monte Carlo Model Determination for Hierarchical and Graphical Models. : The Bayesian approach to comparing models involves calculating the posterior probability of each plausible model. For high-dimensional contingency tables, the set of plausible models is very large. We focus attention on reversible jump Markov chain Monte Carlo (Green, 1995) and develop strategies for calculating posterior probabilities of hierarchical, graphical or decomposable log-linear models. Even for tables of moderate size, these sets of models may be very large. The choice of suitable prior distributions for model parameters is also discussed in detail, and two examples are presented. For the first example, a 2 × 3 × 4 table, the model probabilities calculated using our reversible jump approach are compared with model probabilities calculated exactly or by using an alternative approximation. The second example is a 2^6 contingency table for which exact methods are infeasible, due to the large number of possible models. 1-hop neighbor's text information: Bayesian inference for nondecomposable graphical Gaussian models: In this paper we propose a method to calculate the posterior probability of a nondecomposable graphical Gaussian model. Our proposal is based on a new device to sample from Wishart distributions, conditional on the graphical constraints. As a result, our methodology allows Bayesian model selection within the whole class of graphical Gaussian models, including nondecomposable ones. 1-hop neighbor's text information: Accounting for model uncertainty in survival analysis improves predictive performance (with Discussion). In Bayesian Statistics 5 (J.M. : Survival analysis is concerned with finding models to predict the survival of patients or to assess the efficacy of a clinical treatment. A key part of the model-building process is the selection of the predictor variables. It is standard to use a stepwise procedure guided by a series of significance tests to select a single model, and then to make inference conditionally on the selected model. However, this ignores model uncertainty, which can be substantial. We review the standard Bayesian model averaging solution to this problem and extend it to survival analysis, introducing partial Bayes factors to do so for the Cox proportional hazards model. In two examples, taking account of model uncertainty enhances predictive performance, to an extent that could be clinically useful. Target text information: Model selection and accounting for model uncertainty in graphical models using Occam's window. : We consider the problem of model selection and accounting for model uncertainty in high-dimensional contingency tables, motivated by expert system applications. The approach most used currently is a stepwise strategy guided by tests based on approximate asymptotic P-values leading to the selection of a single model; inference is then conditional on the selected model. The sampling properties of such a strategy are complex, and the failure to take account of model uncertainty leads to underestimation of uncertainty about quantities of interest. In principle, a panacea is provided by the standard Bayesian formalism which averages the posterior distributions of the quantity of interest under each of the models, weighted by their posterior model probabilities. Furthermore, this approach is optimal in the sense of maximising predictive ability.
However, this has not been used in practice because computing the posterior model probabilities is hard and the number of models is very large (often greater than 10^11). We argue that the standard Bayesian formalism is unsatisfactory and we propose an alternative Bayesian approach that, we contend, takes full account of the true model uncertainty by averaging over a much smaller set of models. An efficient search algorithm is developed for finding these models. We consider two classes of graphical models that arise in expert systems: the recursive causal models and the decomposable log-linear models. David Madigan is Assistant Professor of Statistics and Adrian E. Raftery is Professor of Statistics and Sociology, Department of Statistics, GN-22, University of Washington, Seattle, WA 98195. Madigan's research was partially supported by the Graduate School Research Fund, University of Washington and by the NSF. Raftery's research was supported by ONR Contract no. N-00014-91-J-1074. The authors are grateful to Gregory Cooper, Leo Goodman, Shelby Haberman, David Hinkley, Graham Upton, Jon Wellner, Nanny Wermuth, Jeremy York, Walter Zucchini and two anonymous referees for helpful comments and discussions, and to Michael R. Butler for providing the data for the scrotal swellings example. I provide the content of the target node and its neighbors' information. The relation between the target node and its 1-hop neighbors is 'citation'. The 7 categories are: 0: Rule_Learning 1: Neural_Networks 2: Case_Based 3: Genetic_Algorithms 4: Theory 5: Reinforcement_Learning 6: Probabilistic_Methods Question: Based on the content of the target and neighbors' scientific publications, predict the category ID (0 to 6) for the target node.
6
Probabilistic Methods
cora
1,620
test