A neuron is commonly modeled by synapses, represented by weights Wi, connected to input signals Xi. These inputs are either external inputs to the network or outputs from previous neurons. The neuron sums these weighted inputs and computes its output through an activation function.
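The weighted-sum-and-activation model described above can be sketched in a few lines. This is an illustrative example only; the function name and the choice of tanh as activation are our assumptions, not taken from the paper:

```python
import numpy as np

def neuron_output(w, x, activation=np.tanh):
    """Weighted sum of the inputs followed by an activation function."""
    return activation(np.dot(w, x))

# Example: two inputs with weights 0.5 and -0.25
w = np.array([0.5, -0.25])
x = np.array([1.0, 2.0])
y = neuron_output(w, x)  # tanh(0.5*1.0 - 0.25*2.0) = tanh(0.0) = 0.0
```

Any differentiable squashing function (sigmoid, tanh) could be substituted for the activation without changing the structure of the model.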
Computer, 1996
Parallelism and distribution have been considered the key features of neural processing. The term parallel distributed processing is even used as a synonym for artificial neural networks. Nevertheless, the actual implementations are still in search of the appropriate model to "naturally represent" neural computing. And the final judgement is always given in performance figures, keeping the parallelization issue high on the neurosimulation agenda. Two approaches have yielded the best results: parallel simulations on general-purpose computers, and specially developed neurohardware. Programming neural networks on parallel machines requires high-level techniques reflecting both inherent features of neuromodels and characteristics of the underlying computers. On the other hand, emulation of the neuroparadigm requires that the functioning of neural operations be mimicked directly by the hardware. Both approaches are presented, and their advantages and shortcomings are outlined.
The simulation of neural networks is a time-consuming process due to the complex iterative computation involved. However, neural networks are very good candidates for simulation on distributed computing systems because of their inherent parallelism. This paper presents the design of applications capable of simulating neural networks in parallel. First, the design issues of such an approach are discussed, along with the parallelization techniques that can be applied. Then, typical parallelization examples are presented for several important network types, such as Backpropagation networks, Self-Organizing Maps (SOMs) and Radial Basis Function (RBF) networks.
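One standard parallelization technique for the network types mentioned is data parallelism: each processor computes a gradient on its own shard of the training data, and the shard gradients are averaged before the weight update. A minimal sketch, using a toy linear model with squared error and sequential "workers" standing in for distributed processes (the function names and data are illustrative, not from the paper):

```python
import numpy as np

def local_gradient(w, X, y):
    # Gradient of the mean squared error for a linear model on one data shard
    pred = X @ w
    return 2.0 * X.T @ (pred - y) / len(y)

def data_parallel_step(w, partitions, lr):
    # Each "worker" computes a gradient on its shard; gradients are averaged,
    # mimicking an all-reduce in a message-passing implementation
    grads = [local_gradient(w, X, y) for X, y in partitions]
    return w - lr * np.mean(grads, axis=0)

# Toy data split into two shards; the true weight is 2.0
X = np.array([[1.0], [2.0], [3.0], [4.0]])
y = 2.0 * X[:, 0]
partitions = [(X[:2], y[:2]), (X[2:], y[2:])]
w = np.zeros(1)
for _ in range(200):
    w = data_parallel_step(w, partitions, lr=0.05)
```

With equal-sized shards, averaging the shard gradients reproduces the full-batch gradient exactly, so the parallel iteration converges to the same solution as the serial one.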
A neural network is a collection of interconnected neurons that interact through signal-processing operations. The traditional term "neural network" refers to a biological neural network, i.e., a network of biological neurons. The modern meaning of the term also includes artificial neural networks, built of artificial neurons or nodes. Machine learning comprises adaptive mechanisms that allow computers to learn from experience, by example, and by analogy; these learning capabilities can improve the performance of an intelligent system over time. One of the most popular approaches to machine learning is artificial neural networks. An artificial neural network consists of several very simple, interconnected processors, called neurons, which are modeled on biological neurons in the brain. Neurons are linked by weighted connections that pass signals from one neuron to another. Each connection has a numerical weight associated with it. Weights are the basis of long-term memory in artificial neural networks: they express the strength, or importance, of each neuron input. An artificial neural network "learns" through repeated adjustments of these weights.
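The repeated weight adjustment described above can be illustrated with the Widrow-Hoff (delta) rule, one classical learning rule of this kind. A minimal sketch in which the data, learning rate, and decision threshold are our own illustrative choices, not taken from the text:

```python
import numpy as np

def delta_rule_train(X, targets, lr=0.05, epochs=100):
    """Repeatedly adjust weights to reduce the output error (delta rule)."""
    w = np.zeros(X.shape[1])
    for _ in range(epochs):
        for x, t in zip(X, targets):
            y = np.dot(w, x)           # neuron output (linear here)
            w += lr * (t - y) * x      # adjust each weight by the error
    return w

# Learn the logical AND; the first column is a constant bias input of 1
X = np.array([[1, 0, 0], [1, 0, 1], [1, 1, 0], [1, 1, 1]], dtype=float)
t = np.array([0, 0, 0, 1], dtype=float)
w = delta_rule_train(X, t)
predictions = (X @ w > 0.5).astype(int)
```

Each pass nudges the weights toward values that reproduce the targets, which is exactly the "repeated adjustment" that constitutes learning in this model.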
Neural networks are defined using only elementary concepts from set theory, without the usual connectionistic graphs. The typical neural diagrams are derived from these definitions. This approach provides mathematical techniques and insight to develop theory and applications of neural networks.
At the present time, backpropagation is the most popular learning algorithm for multilayer feedforward neural networks. It can be expressed in the formalism of vector or matrix algebra: extensions of scalar operations and matrix products. These operations can be performed with a high degree of parallelism. Three architectures involving loosely coupled processors are considered: torus, mesh and ring of processors. The performances of these architectures are assessed and their privileged applications are discussed. The experiments, which were performed on a hypercube of Transputers, are in agreement with the theoretical predictions. It is shown analytically and experimentally that the implementation of neural networks on a multiprocessor architecture, if performed properly, can be efficient.
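The matrix-algebra formulation of backpropagation mentioned in the abstract can be sketched for a single hidden layer, where every forward and backward quantity is a matrix product; this data-level parallelism is what the torus, mesh and ring mappings exploit. The network size, data, and learning rate below are illustrative assumptions:

```python
import numpy as np

def train_step(X, T, W1, W2, lr=0.1):
    """One backpropagation step written purely as matrix products."""
    # Forward pass
    H = np.tanh(X @ W1)              # hidden activations
    Y = H @ W2                       # linear output layer
    # Backward pass: deltas and gradients are matrix products too
    dY = (Y - T) / len(X)            # output delta (MSE; factor 2 folded into lr)
    dW2 = H.T @ dY
    dH = (dY @ W2.T) * (1 - H**2)    # tanh derivative
    dW1 = X.T @ dH
    return W1 - lr * dW1, W2 - lr * dW2

# Toy problem: XOR with 4 hidden units
rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
T = np.array([[0], [1], [1], [0]], dtype=float)
W1 = rng.normal(scale=0.5, size=(2, 4))
W2 = rng.normal(scale=0.5, size=(4, 1))
loss_before = np.mean((np.tanh(X @ W1) @ W2 - T) ** 2)
for _ in range(2000):
    W1, W2 = train_step(X, T, W1, W2, lr=0.1)
loss_after = np.mean((np.tanh(X @ W1) @ W2 - T) ** 2)
```

Because every step is a matrix product, the computation can be block-partitioned across processors, which is what makes the loosely coupled architectures studied in the paper effective.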
2007
Neurocomputing is a comprehensive computational paradigm inspired by mechanisms of neural sciences and brain functioning that is rooted in learning instead of preprogrammed behavior. In this sense, neurocomputing becomes fundamentally different from the paradigm of programmed, instruction-based models of optimization. Artificial neural networks (neural networks, for short) exhibit some characteristics of biological neural networks in the sense that the constructed networks include some components of distributed representation and processing as well as rely on various schemes of learning during their construction. The generalization capabilities of neural networks form one of their most outstanding features. The ability of neural networks to generalize, namely, to develop solutions that are meaningful beyond the scope of the learning data, is commonly exploited in various applications. From the architectural standpoint, a neural network consists of a collection of simple nonlinear processing components called neurons, which are combined together via a net of adjustable numeric connections. The development of a neural network is realized through learning. This means choosing an appropriate network structure and a learning procedure to achieve the goals of the intended application. Neural networks have been successfully applied to a variety of problems in pattern recognition, signal prediction, optimization, control, and image processing. Here, we summarize the most essential architectural and development aspects of neurocomputing. Computational Model of Neurons: A typical mathematical model of a single neuron (Anthony and Barlet, 1999) comes in the form of an n-input single-output nonlinear mapping (Fig. B1) described as follows: y = f(Σ_{i=1}^{n} w_i x_i)   (B1)
The objective of this work-in-progress paper is the presentation of the design of a general purpose parallel neural network simulator that can be used in a distributed computing system as well as in a cluster environment. The design of this simulator follows the object oriented approach, while the adopted parallel programming paradigm is the message passing interface.
Mathematics and Computers in Simulation, 1989
2012
Abstract: Simulation of large networks of neurons is a powerful and increasingly prominent methodology for investigating brain functions and structures. Dedicated parallel hardware is a natural candidate for simulating the dynamic activity of many non-linear units communicating asynchronously. It is only scientifically useful, however, if the simulation tools can be configured and run easily and quickly.