
New representations gained popularity, such as the graphical programs in Cartesian genetic programming 15, 16 and the implicit encoding of connectivity in analogue genetic encoding 17, which is inspired by genetic regulatory networks. As the scope of attributes to evolve broadened, so did the need for algorithmic advances to support these more ambitious ends. For example, the shift from evolving fixed topologies to increasingly complex ones created new challenges, such as crossing over structures with different topologies (that is, combining the structures of two parent networks to create a parsimonious offspring network) and protecting more complex structures from dying out of the population before their weights have had enough time to be optimized to reveal their true potential.

One landmark algorithm, NEAT 18, addressed the problem of crossing over variable topologies through historical markings (which tell the crossover operator which parts of two neural networks are similar and can thus be swapped) and prevented the premature extinction of augmented structures through a mechanism called speciation. Solving these problems made evolving increasingly complex topologies more effective. The early successes in the field often concerned evolving neural network controllers for robots, a practice known as evolutionary robotics 19, 20. One prominent success was producing the first running gait for the Sony Aibo robot. Another was evolving the neural networks and morphologies of robots that were 3D-printed and could move around in the real world. A notable accomplishment outside of evolutionary robotics was helping to discover, through NEAT, the most accurate measurement yet of the mass of the top quark, achieved at the Tevatron particle collider. Neuroevolution also enabled some innovative video game concepts, such as evolving new content in real time while the game is played 24 or allowing the player to train non-player characters as part of the game through neuroevolution. Neuroevolution has also been used to study open questions in evolutionary biology, such as the origins of the regularity, modularity and hierarchy found in biological networks like the neural networks in animal brains 26, 27. Although impressive, especially in their day, all of these successful applications involved tiny neural networks by modern standards, composed of hundreds or thousands of connections instead of the millions of connections commonly seen in modern deep neural network (DNN) research.
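
The historical-marking mechanism described above can be sketched in a few lines. The sketch below is illustrative, not the full NEAT algorithm: genomes are reduced to dicts mapping innovation numbers (the historical markings) to connection weights, and node genes, speciation and mutation are omitted.

```python
import random

def neat_crossover(parent_a, parent_b, fitness_a, fitness_b):
    """Align two genomes by innovation number (historical marking).

    Each genome is a dict mapping innovation number -> connection weight.
    Matching genes (same innovation number) are inherited randomly from
    either parent; disjoint and excess genes come from the fitter parent.
    """
    fitter = parent_a if fitness_a >= fitness_b else parent_b
    other = parent_b if fitter is parent_a else parent_a
    child = {}
    for innov, weight in fitter.items():
        if innov in other:
            # Matching gene: inherit from either parent at random.
            child[innov] = random.choice([weight, other[innov]])
        else:
            # Disjoint or excess gene: inherit from the fitter parent.
            child[innov] = weight
    return child
```

Because genes are aligned by shared history rather than by position, two networks with different topologies can be recombined without guessing which structures correspond to each other.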

A natural question is whether evolution is up to the task of evolving such large DNNs, which we address next. An intriguing historical pattern is that many classic machine learning algorithms perform qualitatively differently, and far better, once they are scaled to take advantage of the vast computing resources now available. The best-known example is that of deep learning. The algorithms and ideas for how to train neural networks, namely backpropagation 5 coupled with optimization tricks (for example, momentum 28, 29) and important architectural motifs (for example, convolution 30 or long short-term memory units, LSTMs 31), have been known for decades 1.


For many years, these algorithms did not perform well for neural networks with more than a few layers 1, 2. However, once combined with faster, modern computers, including the speedups provided by graphics processing units (GPUs), and paired with large datasets, these algorithms produced great improvements in performance, which have generated most of the recent excitement about, and investment in, AI 1, 2, 32, 33. Research in recent years has similarly shown that neuroevolution algorithms also perform far better when scaled to take advantage of modern computing resources.

As described next, scientists have found that neuroevolution is a competitive alternative to gradient-based methods for training deep neural networks for reinforcement learning problems.


These results are important because they also foreshadow the potential for neuroevolution to make an impact across the spectrum of neural network optimization problems at modern scale, including cases such as architecture search, where differentiation (as used in most conventional deep learning) is not a clear solution. Reinforcement learning involves AI agents learning by trial and error in an environment without direct supervision.

Instead, they try different action sequences, receive infrequent rewards for those actions and must learn from this sparse feedback which future actions will maximize reward 4. This type of learning is more challenging than supervised learning, in which the correct output for each input is given during training, and the main challenge is learning that mapping in a way that generalizes.


Reinforcement learning, in contrast, requires exploring the environment to try to discover the optimal actions to take, including figuring out which actions lead to rewarding events, sometimes when the relevant actions and the rewards that they generate are separated by long time horizons, which is known as the credit-assignment problem 4. Although algorithms have existed for decades to train reinforcement learning agents in problems with low-dimensional input spaces 4, there has recently been a surge of progress and interest in deep reinforcement learning, which involves DNNs that learn to sense and act in high-dimensional state spaces (for example, raw visual streams that involve thousands or more pixel values per frame of video).

The results that have had a particularly large impact are that deep reinforcement learning algorithms can learn to play many different Atari video games 3 and learn how to make simulated robots walk 35, 36. In a surprise to many, Salimans et al. 38 showed that a form of evolution strategy, specifically a natural evolution strategy (NES), could train DNNs with millions of parameters to play Atari games and control simulated robots competitively with gradient-based deep reinforcement learning algorithms. The surprise was that an evolutionary algorithm could compete with gradient-based methods in such high-dimensional parameter spaces.
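
To make the evolution-strategy idea concrete, here is a minimal sketch in the spirit of an NES update; the function names, hyperparameters and toy fitness function are illustrative and not the configuration used in the work cited above. The parameter vector is perturbed with Gaussian noise, each perturbation is evaluated, and the vector is moved along the fitness-weighted average of the noise, which estimates a gradient of expected fitness.

```python
import numpy as np

def es_step(theta, fitness, npop=50, sigma=0.1, alpha=0.01, rng=None):
    """One NES-style update: a finite-difference estimate of the gradient
    of expected fitness, built from npop Gaussian perturbations of theta."""
    if rng is None:
        rng = np.random.default_rng()
    eps = rng.standard_normal((npop, theta.size))              # perturbation directions
    scores = np.array([fitness(theta + sigma * e) for e in eps])
    scores = (scores - scores.mean()) / (scores.std() + 1e-8)  # normalize fitness
    grad_est = eps.T @ scores / (npop * sigma)                 # gradient estimate
    return theta + alpha * grad_est

# Toy problem: climb towards a hidden target vector without ever
# computing a true gradient (only fitness evaluations are used).
target = np.array([0.7, -0.3, 0.5])
fitness = lambda w: -np.sum((w - target) ** 2)
theta = np.zeros(3)
rng = np.random.default_rng(0)
for _ in range(500):
    theta = es_step(theta, fitness, rng=rng)
```

Note that only fitness evaluations are needed, which is why each candidate can be evaluated on a separate worker and the method parallelizes so well.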

However, because NES can be interpreted as a gradient-based method (it estimates a gradient in parameter space and takes a step in that direction), many did not conclude from this work that a pure gradient-free evolutionary algorithm can operate at DNN scale. That changed with the result that a simple genetic algorithm was also competitive with DQN, A3C and the evolution strategy on Atari games, and outperformed them on many games 42. Moreover, on a subset of games, the genetic algorithm even outperformed later, more powerful versions of these algorithms 43. The genetic algorithm is entirely gradient-free in that it contains a population of agents (each a DNN parameter vector) that are independently mutated and that reproduce more if their performance is better relative to others in the population.
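
A population-based, entirely gradient-free genetic algorithm of this kind can be sketched as follows. All hyperparameters and the toy fitness function are illustrative; in the work described above, each genome would be the weight vector of a DNN policy and fitness would be the game score.

```python
import numpy as np

def simple_ga(fitness, dim, pop_size=40, elite=8, sigma=0.05, generations=100, seed=0):
    """A bare-bones gradient-free genetic algorithm over parameter vectors.

    Each individual is a flat parameter vector (standing in for the weights
    of a DNN policy). Every generation, the top `elite` individuals survive
    and the rest of the population is refilled with Gaussian-mutated copies
    of randomly chosen elites -- no gradients and no crossover.
    """
    rng = np.random.default_rng(seed)
    pop = rng.standard_normal((pop_size, dim)) * 0.5
    for _ in range(generations):
        scores = np.array([fitness(ind) for ind in pop])
        elites = pop[np.argsort(scores)[-elite:]]                 # truncation selection
        parents = elites[rng.integers(elite, size=pop_size - elite)]
        children = parents + sigma * rng.standard_normal(parents.shape)
        pop = np.vstack([elites, children])
    scores = np.array([fitness(ind) for ind in pop])
    return pop[np.argmax(scores)]

# Toy fitness: negative squared distance to the all-ones vector.
best = simple_ga(lambda w: -np.sum((w - 1.0) ** 2), dim=5)
```

As with the evolution strategy, each individual's evaluation is independent, so the generation loop parallelizes trivially across workers.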

Both Salimans et al. 38 and Such et al. 42 exploited the parallelizability of evolution, distributing training across hundreds or thousands of CPU cores. In some cases, and compared to some algorithms (for example DQN, but not A3C 42), evolutionary algorithms can be less sample efficient, but because they are extremely parallelizable, they can run far faster in real wall-clock time (for example, hours instead of days), albeit at the cost of requiring more computing resources 38, 42. Specifically, Salimans et al. reported that, with sufficient parallelization, their evolution strategy could learn a simulated humanoid walking gait in a matter of minutes. Mania et al. later showed that even simple random search over linear policies is competitive on standard simulated robot locomotion benchmarks. That neuroevolution performs well for controlling robots is not surprising, given its long history of success in the field of evolutionary robotics 19, 20, 21, 46. Interest is also growing in ways to hybridize the gradient-based methods of deep learning with neuroevolution.

Lehman et al. proposed one such hybridization: safe mutations, which use gradient information to shape how random mutations perturb a network's weights.


In neuroevolution, random mutations are made to the policy (the mapping from inputs to actions, here represented by a DNN). Some of these mutations may have no effect on the behaviour of the network, and others might have major, and thus usually catastrophic, consequences for the policy (for example, always outputting the same action).

The insight behind safe mutations is that we can keep a reference library of states and actions and, incurring only the slight cost of a forward and backward pass, use gradient information to scale the per-weight magnitude of mutations so that the changes they make to the policy on the reference set are neither too large nor too small. Another hybridization that has been proposed runs variants of gradient-based reinforcement learning as the engine behind crossover and mutation operators within a neuroevolution algorithm. Still another direction, which has been shown to perform well, combines the style of evolutionary algorithms (which search directly in the space of neural network parameters) with the style of policy gradient and Q-learning algorithms (which search in the space of actions and then change neural network parameters via backpropagation to make profitable actions more likely): it creates random parameter perturbations to drive consistent exploration, like evolutionary algorithms, but then reinforces successful actions into weight parameters via backpropagation 50. What is exciting about the successes described so far is that they were achieved with simple neuroevolution algorithms.
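
The safe-mutation idea can be illustrated with a deliberately tiny example. For a one-layer linear policy, the output's sensitivity to each weight is available in closed form, so no backward pass is needed; in the actual method, the per-weight sensitivities of a DNN's outputs over the reference set would be obtained with backpropagation. The function name and two-feature setup below are illustrative assumptions.

```python
import numpy as np

def safe_mutation(weights, ref_inputs, sigma=0.1, rng=None):
    """Sketch of gradient-scaled 'safe' mutation for a linear policy y = w.x.

    For this one-layer policy, the sensitivity of the output to weight i on
    a reference input x is simply x_i, so per-weight output sensitivity over
    the reference set can be computed analytically (a DNN would use one
    backward pass instead). Mutations are divided by that sensitivity, so
    weights the policy's outputs depend on strongly are perturbed less.
    """
    if rng is None:
        rng = np.random.default_rng()
    # RMS per-weight sensitivity of the output over the reference set.
    sensitivity = np.sqrt(np.mean(ref_inputs ** 2, axis=0)) + 1e-8
    perturbation = sigma * rng.standard_normal(weights.shape) / sensitivity
    return weights + perturbation

rng = np.random.default_rng(1)
# Reference states: feature 0 varies a lot, feature 1 barely at all,
# so the policy output is far more sensitive to weight 0 than weight 1.
ref = np.column_stack([rng.normal(0, 10.0, 200), rng.normal(0, 0.1, 200)])
w = np.array([1.0, 1.0])
w_new = safe_mutation(w, ref, rng=rng)
```

The result is that weight 0, to which the output is highly sensitive, receives much smaller perturbations than weight 1, keeping the mutated policy's behaviour on the reference set close to the parent's.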

However, the neuroevolution community has invented many sophisticated techniques that can greatly improve the performance of these simple algorithms. Many are based on the observation that evolution, in both its natural and computational instantiations, is an exciting engine of innovation 52, and these more modern techniques attempt to recreate that creativity algorithmically to search for better neural networks. As we discuss below, work has already begun that ports many of these ideas, including those that encourage diversity, novelty and intrinsic motivation 42, 53, 54, and these enhancements are improving performance.

Other important ideas covered in this article include indirect encoding, a method for encoding very large structures 55, and the evolution of architectures 56, 57 for networks trained by gradient descent. Continuing to test the best ideas from the neuroevolution community at the scale of deep neural networks, with modern amounts of computing power and data, is likely to yield considerable additional advances.

Moreover, combining such ideas with those from deep learning and deep reinforcement learning is a research area that should continue to deliver many breakthroughs. Each of the next sections describes what we consider to be the most exciting ideas from the neuroevolution community in the hope of encouraging researchers to experiment with them at DNN scales and to blend them with ideas from traditional machine learning.

A hallmark of natural evolution is the amazing diversity of complex, functional organisms it has produced, from the intricate machinery of single-cell life to the massive collaborative union of cells that forms animals of all sorts, including humans. In addition to being interesting in its own right, this massively parallel exploration of ways of life was probably critical for the evolution of human intelligence, because diversity is what makes innovation possible 58. Thus a similar drive towards diversity is important when considering neuroevolution as a possible route to human-level AI.

For these reasons, neuroevolution, and evolutionary computation as a whole, have long focused on diversity 60. Indeed, by adapting a population of solutions, evolutionary algorithms are naturally suited to the parallel exploration of diverse solutions. The idea is that if search has converged to a local optimum, then encouraging exploration away from that optimum may be enough to uncover a new, promising gradient of improvement.

Representative approaches include crowding 62, in which a new individual replaces the one most genetically similar to it, and explicit fitness sharing 60, in which individuals are clustered by genetic distance and punished in proportion to how many members are in their cluster. Although sometimes effective, such parameter-space diversity often fails to produce a wide diversity of different behaviours 63, because there are infinite ways to set neural network weights that instantiate the same behaviour, owing to function-preserving rescaling of weights 64, permuting of nodes 65 or redundant mappings (for example, many different weight settings can cause a robot to fall down immediately).
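
Explicit fitness sharing is easy to state precisely. The sketch below uses a simplified binary sharing kernel, in which individuals within a fixed genotype-distance radius count as one niche; the function name and radius value are illustrative.

```python
import numpy as np

def shared_fitness(population, raw_fitness, radius=1.0):
    """Explicit fitness sharing with a binary sharing kernel.

    Each individual's raw fitness is divided by the number of individuals
    within `radius` of it in genotype space, so crowded regions of the
    search space are penalized and diversity is preserved.
    `population` is an (n, d) array of parameter vectors.
    """
    pop = np.asarray(population, dtype=float)
    # Pairwise Euclidean distances between all individuals.
    dists = np.linalg.norm(pop[:, None, :] - pop[None, :, :], axis=-1)
    niche_counts = (dists < radius).sum(axis=1)  # includes the individual itself
    return np.asarray(raw_fitness, dtype=float) / niche_counts

# Two near-duplicates share their niche; the distant third keeps full fitness.
pop = [[0.0, 0.0], [0.1, 0.0], [5.0, 5.0]]
shared = shared_fitness(pop, [1.0, 1.0, 1.0])
```

Note that the sharing here operates on genetic (parameter-space) distance, which is exactly the limitation the text goes on to describe: identical behaviours can sit far apart in parameter space and vice versa.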

In other words, while it is trivial to generate diverse but similarly behaving parameter vectors, escaping from local optima often requires exploring diverse behaviours 63, as biological evolution does, and as is important in animal 66 and human problem solving. Because of this limitation of genetic diversity, more recent approaches directly reward a diversity of behaviours 63, 68, and further research has led to related ideas such as directly evolving for desired qualities like curiosity 54, evolvability 69 or surprise. A representative approach 68 involves a multi-objective evolutionary algorithm 71, 72 that rewards individuals both for increasing their fitness and for diverging from other individuals along experimenter-specified characterizations of behaviour in the domain.

In this way, the search can organically push different individuals in the population towards different trade-offs between exploring in relevant behavioural dimensions and optimizing performance. One helpful step in developing new algorithms that explore the breadth of potential diversification techniques is to break out of the box wherein evolution is viewed mainly as an optimizer.

Biological evolution is unlike optimization in the sense that it does not strive towards any particular organism. Indeed, one of its fundamental mechanisms is an accumulation of diverse novelty, bringing into question whether optimizing for a single optimal individual captures what enables evolution to discover rich and complex behaviour.


This alternative point of view recognizes that diversity is the premier product of evolution. One algorithm that embodies it is novelty search 63, which rewards individuals only for behaving differently from those that came before them. In other words, the search algorithm includes no pressure towards greater improvement according to a traditional fitness or performance measure. The idea is that, as a whole, the population will spread over generations of evolution to span a wide range of behaviours. Although the gradient of divergence in the genetic space can be uninformative (because many different genomes can produce the same uninteresting behaviour), the gradient of behavioural novelty often contains useful domain information.

In other words, doing something new often requires learning skills that respect the constraints of a domain; for example, learning to perform a new skateboard trick requires the balance and coordination that might be gained just by riding around. Indeed, in some reinforcement learning domains, searching only for behavioural novelty outperforms goal-directed search 53. Although first applied to small networks, novelty search has recently been demonstrated to scale to high-dimensional reinforcement learning problems, where it improves performance 42, 53, providing another example of how ideas from the neuroevolution community can benefit from modern amounts of computation.
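
The core computation in novelty search, scoring each individual by the distance of its behaviour characterization to its nearest neighbours with no fitness term anywhere, can be sketched as follows (the k value and names are illustrative):

```python
import numpy as np

def novelty_scores(behaviours, archive, k=3):
    """Novelty search scoring.

    An individual's novelty is the average distance from its behaviour
    characterization to its k nearest neighbours among the current
    population plus an archive of past behaviours. No objective or
    fitness term appears anywhere in the score.
    """
    behaviours = np.asarray(behaviours, dtype=float)
    pool = (np.vstack([behaviours, np.asarray(archive, dtype=float)])
            if len(archive) else behaviours)
    scores = []
    for b in behaviours:
        d = np.sort(np.linalg.norm(pool - b, axis=1))
        scores.append(d[1:k + 1].mean())  # d[0] == 0 is the point itself
    return np.array(scores)
```

In practice the behaviour characterization is domain-specific (for example, a robot's final position), and individuals whose behaviours are sufficiently novel are added to the archive so the search keeps moving into unvisited regions of behaviour space.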

In quality diversity, an algorithm is designed to illuminate the diversity of possible high-quality solutions to a problem, just as evolution has uncovered well-adapted organisms across countless environmental niches. Examples of early quality diversity algorithms include novelty search with local competition (NSLC) 74 and the multi-dimensional archive of phenotypic elites (MAP-Elites) 75, which provide different ways to integrate a pressure to perform well within a diversifying search. NSLC modifies a multi-objective evolutionary algorithm to optimize a population for both diverse and locally optimal individuals (individuals that perform well relative to similar strategies).

MAP-Elites is a simple but powerful algorithm that subdivides a space of possible behaviours into discrete niches, each containing a single champion that is the highest-performing agent of that type found so far. Competition is enforced only locally, but mutation to a parent from one niche can produce a new champion in another niche, enabling exaptation-like effects: that is, becoming high-performing in one niche may be a stepping stone to success in another.
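
A minimal MAP-Elites loop fits in a few lines. The toy domain, the discretization of behaviour into niches via a squashing function, and all hyperparameters below are illustrative assumptions, not part of the published algorithm's configuration.

```python
import numpy as np

def map_elites(evaluate, behaviour, dim, bins=10, iters=2000, sigma=0.2, seed=0):
    """Minimal MAP-Elites: keep one champion per discretized behaviour niche.

    Each iteration mutates a randomly chosen elite; the child competes only
    against the current occupant of whatever niche its own behaviour falls
    into, so a parent from one niche can seed a champion in another.
    """
    rng = np.random.default_rng(seed)
    archive = {}                                  # niche index -> (fitness, genome)
    for _ in range(iters):
        if archive:
            keys = list(archive)
            parent = archive[keys[rng.integers(len(keys))]][1]  # random elite
            child = parent + sigma * rng.standard_normal(dim)
        else:
            child = rng.standard_normal(dim)      # seed the archive randomly
        niche = tuple(np.clip((behaviour(child) * bins).astype(int), 0, bins - 1))
        fit = evaluate(child)
        if niche not in archive or fit > archive[niche][0]:
            archive[niche] = (fit, child)         # new champion for this niche
    return archive

# Toy domain: fitness favours small weights; behaviour is the genome itself,
# squashed into [0, 1)^2 so it can be binned into niches.
squash = lambda w: 1.0 / (1.0 + np.exp(-w))
archive = map_elites(lambda w: -np.sum(w ** 2), lambda w: squash(w), dim=2)
```

The returned archive is exactly the "repertoire" described above: one high-performing solution per behavioural niche rather than a single global optimum.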

The product of such algorithms is often called a repertoire: that is, a collection of diverse yet effective options rather than a single optimal solution. In a result published in Nature, MAP-Elites was applied to discover such a diverse repertoire of high-performing walking gaits, so that, after being damaged, a robot could quickly recover by searching for the best of these champion gaits that still worked despite the damage. The space of quality diversity algorithms continues to expand 76, 77, 78, 79, 80 and is an exciting area of current research.

Although much progress has been made in diversity-driven neuroevolution algorithms, a considerable qualitative gap remains between the complexity of what nature discovers and the current products of evolutionary algorithms. Such a gap hints that there are breakthroughs in this area yet to be made. With its trillions of connections and billions of neurons 81, the human brain far exceeds the size of any modern artificial neural network.

Situated within its expanse is an intricate architecture of modules and patterns of connectivity that underpin human intelligence. A fascinating question is how this astronomical structure is encapsulated within our DNA-based genetic code, whose capacity is only about 30,000 genes (3 billion base pairs). Learning, of course, is a critical part of the story, but there is still a tremendous amount of information encoded by the genome regarding the overall architecture (how many neurons there are, their modular components, which modules are wired to which other modules and so on). The rules that govern how learning occurs are also part of the specification.

The need to encode all these components requires regularity (that is, the reuse of structural motifs) and the compression it enables, so that the genome can be reasonably compact.


Interestingly, regularity provides powerful computational advantages for neural structures as well. For example, the power of regular structure is familiar to anyone with experience in deep learning through the success of convolution. Convolution is a particular regular pattern of connectivity, in which the same feature detector is situated at many locations in the same layer.

Convolution was designed by hand as a heuristic solution to the problem of capturing translation-invariant features at different levels of hierarchy 2. This simple regularity has proven so powerful as to become nearly ubiquitous across the successful modern architectures of deep learning 2. However, neuroevolution raises the prospect that the identification of powerful regularities need not ultimately fall to the hands of human designers. This prospect also connects naturally to the potential of compressed encodings to describe vast architectures composed of extensive regularities beyond convolution.

For example, a larger palette of regularities could include various symmetries (bilateral or radial, for instance) as well as gradients along which filters vary according to a regular principle (such as becoming smaller towards the periphery). Ultimately, it would be ideal if machine learning could discover such patterns, including convolution, on its own, without requiring, and being limited by, the cleverness of a designer or the reverse-engineering capabilities of a neuroscientist.

Motivated by the compression of DNA in nature, research on indirect encoding stretches back decades to experiments in pattern formation 85, 86. Later researchers explored evolvable encodings for a wide range of structures, from blobs of artificial cells to robot morphologies to neural networks 55, including influential work by Gruau 7, Bongard and Pfeifer 87, and Hornby and Pollack. A popular modern indirect encoding in neuroevolution is the compositional pattern-producing network (CPPN). CPPNs function similarly to neural networks, but their inspiration comes instead from developmental biology, where structure is situated and built within a geometric space.

For example, early in the development of the embryo, chemical gradients help to define axes from head to tail, front to back, and left to right. That way, structures such as arms and legs can be situated in their correct positions. Furthermore, within such structures are substructures, such as the fingers of the hand, which themselves must be placed within the local coordinate system of the hand.

All of this configuration happens in biological systems through cells producing and reacting to diffusing chemicals called morphogens, which would be extremely computationally expensive to simulate. CPPNs abstract this process into a simple network of function compositions that can be represented as a graph. At the input layer, the primary axes (for example, x and y for a two-dimensional structure) are input into the network, serving as the base coordinate system.

From there, a small set of activation functions that abstract common structural motifs within developing embryos is composed to yield more complex patterns. For example, a Gaussian function elicits the equivalent of a symmetric chemical gradient, a sigmoid generates an asymmetric one, and a sine wave recalls segmentation. When such functions are composed with each other within a weighted network (itself a special kind of neural network), increasingly complex regular patterns emerge. Traditionally, CPPNs are evolved with the NEAT algorithm 18, which allows architectures of increasing complexity to evolve, starting from a very simple initial form.
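
This composition of structural motifs is simple to demonstrate. The CPPN below is hand-wired rather than evolved: it composes a Gaussian and a sine over 2D coordinates, and querying it at every point of a grid yields a pattern that is bilaterally symmetric in x and segmented along y. The weights and function choices are illustrative.

```python
import numpy as np

def tiny_cppn(x, y):
    """A hand-wired compositional pattern-producing network (CPPN).

    Inputs are spatial coordinates; hidden 'neurons' apply structural-motif
    activation functions (a Gaussian for symmetry, a sine for segmentation)
    and compose them, so one small network paints a large, regular 2D pattern.
    """
    h1 = np.exp(-(x ** 2))               # Gaussian of x: bilateral symmetry about x = 0
    h2 = np.sin(3.0 * y)                 # sine of y: repeating segments along y
    return np.tanh(1.5 * h1 + 0.8 * h2)  # composed output, bounded in (-1, 1)

# Query the CPPN at every coordinate of a 64x64 grid spanning [-2, 2]^2.
coords = np.linspace(-2.0, 2.0, 64)
xx, yy = np.meshgrid(coords, coords)
pattern = tiny_cppn(xx, yy)
```

The key property is that the genome (here, a handful of function choices and weights) is far smaller than the phenotype it describes: the same network can be queried at any resolution, and its regularities come for free from the motif functions rather than being encoded connection by connection.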