Structure and Functions of a Replicative Neuro-like Module
Keywords:
Evolutionary Modeling, Declarative Programming of Neural Networks, Chinese Room, Replicative Neuro-like Module, Model of Neocortex Columns
Abstract
This work describes a technology for constructing a neural network artificial intelligence (AI) system at the junction of declarative programming and machine learning, based on modeling cortical columns. Evolutionary mechanisms, using available material and relatively simple phenomena, have created complex intelligent systems. From this the authors conclude that AI should likewise be based on simple but scalable and biologically plausible algorithms, in which the stochastic dynamics of cortical neural modules make it possible to find solutions to complex problems quickly and efficiently. Purpose: Algorithmic formalization at the level of replicative neural network complexes, the neocortex columns of the brain. Methods: The basic AI module is presented as a specialization and formalization of the "Chinese room" concept introduced by John Searle. Results: The results of experiments on forecasting binary sequences are presented. The computer simulation experiments showed that the proposed algorithms are highly effective. Moreover, instead of a separate method carefully selected and adapted for each task, with partially equivalent restatements of the tasks, a single unified approach with unified algorithm parameters was used. It is concluded that the experimental results demonstrate the possibility of effective applied solutions based on the proposed technology. Practical value: the presented technology allows self-learning and planning systems to be created.
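As a rough illustration of the abstract's central idea, a module that forecasts a binary sequence by pure rule lookup over stored context patterns, consider the minimal Python sketch below. It is not the authors' algorithm: the class name ChineseRoomModule, the fixed context-window length, and the majority-vote prediction rule are assumptions introduced here for illustration only.

```python
from collections import defaultdict


class ChineseRoomModule:
    """Toy 'Chinese room' style predictor (illustrative sketch only).

    It stores fixed-length context patterns together with counts of the
    symbols that followed them, and answers queries by looking up the
    current context -- rule application without any 'understanding'
    of the sequence being forecast.
    """

    def __init__(self, window=4):
        self.window = window                      # context length (assumed)
        self.table = defaultdict(lambda: [0, 0])  # pattern -> counts of next 0 / next 1

    def predict(self, history):
        """Predict the next bit from the last `window` bits by majority vote."""
        key = tuple(history[-self.window:])
        zeros, ones = self.table[key]
        return 1 if ones > zeros else 0

    def learn(self, history, next_bit):
        """Record which bit actually followed the current context."""
        key = tuple(history[-self.window:])
        self.table[key][next_bit] += 1


# Usage: online forecasting of a periodic binary test sequence.
seq = [0, 1, 1, 0] * 25
module, correct = ChineseRoomModule(window=4), 0
for t in range(4, len(seq)):
    correct += module.predict(seq[:t]) == seq[t]
    module.learn(seq[:t], seq[t])
print(f"accuracy: {correct / (len(seq) - 4):.2f}")
```

The point of the analogy is that correct answers emerge from symbol manipulation alone, with no model of why the sequence behaves as it does.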
References
2. Shi Y., Cao J. Finite-time synchronization of memristive Cohen–Grossberg neural networks with time delays. Neurocomputing. 2020. vol. 377. pp. 159–167.
3. Stepanyan I.V., Savel'ev A.V. [Managing the chaotic properties of a neuron] Nejro-komp'yutery: razrabotka, primenenie – Neurocomputers: development and application. 2016. vol. 6. pp. 27–29. (In Russ.).
4. Stepanyan I.V., Homich A.V., Karpishuk A.V. [The principle of modularity in evolutionary optimization of neural network structures] Nejrokomp'yutery: razrabotka i primenenie – Neurocomputers: development and application. 2006. vol. 3. pp. 17–25. (In Russ.).
5. Ruizheng J. et al. A Collective Intelligence Based Differential Evolution Algorithm for Optimizing the Structure and Parameters of a Neural Network. IEEE Access. 2020. vol. 8. pp. 69601–69614.
6. Beniaguev D., Segev I., London M. Single Cortical Neurons as Deep Artificial Neural Networks. bioRxiv. 2020. pp. 613141.
7. Engelken R., Wolf F., Abbott L.F. Lyapunov spectra of chaotic recurrent neural networks. 2020. arXiv preprint arXiv:2006.02427.
8. Liu Q., Ulloa A., Horwitz B. Using a Large-scale Neural Model of Cortical Object Processing to Investigate the Neural Substrate for Managing Multiple Items in Short-term Memory. J Cogn Neurosci. 2017. vol. 29. no. 11. pp. 1860–1876.
9. Cabessa J., Villa A.E.P. An Attractor-Based Complexity Measurement for Boolean Recurrent Neural Networks. PLoS ONE. 2014. vol. 9. no. 4. pp. e94204.
10. Cao J., Cui H., Shi H., Jiao L. Big Data: A Parallel Particle Swarm Optimization-Back-Propagation Neural Network Algorithm Based on MapReduce. PLoS ONE. 2016. vol. 11. no. 6. pp. e0157551.
11. Habenschuss S., Jonke Z., Maass W. Stochastic Computations in Cortical Microcircuit Models. PLoS Comput Biol. 2013. vol. 9. no. 11. pp. e1003311.
12. Fry D.B. The development of the phonological system in the normal and deaf child. The genesis of language. 1966. pp. 187–206.
13. Lucas A. et al. Neural Networks for Modeling Neural Spiking in S1 Cortex. Front. Syst. Neurosci. 2019. vol. 13. pp. 13.
14. Teka W.W., Upadhyay R.K., Mondal A. Spiking and bursting patterns of fractional-order Izhikevich model. Communications in Nonlinear Science and Numerical Simulation. 2018. vol. 56. pp. 161–176.
15. Karam E. et al. Izhikevich Neuron Spike Model in LabVIEW. 2017 ASEE Northeast Section Conference. 2017.
16. Chambers J.D. et al. Computational Neural Modeling of Auditory Cortical Receptive Fields. Front Comput Neurosci. 2019. vol. 13. pp. 28.
17. Mycielski J., Swierczkowski S. A model of the neocortex. Advances in Applied Mathematics. 1988. vol. 9. no. 4. pp. 465–480.
18. Ahmad S., Hawkins J. How do neurons operate on sparse distributed representations? A mathematical theory of sparsity, neurons and active dendrites. 2016. arXiv preprint arXiv:1601.00720.
19. Schnepel P. et al. Physiology and Impact of Horizontal Connections in Rat Neocortex. Cerebral Cortex. 2015. vol. 25. no. 10. pp. 3818–3835.
20. Vaz A.I.F., Vicente L.N. A particle swarm pattern search method for bound constrained global optimization. Journal of Global Optimization. 2007. vol. 39. no. 2. pp. 197–219.
21. Momma M., Bennett K.P. A pattern search method for model selection of support vector regression. Proceedings of the 2002 SIAM International Conference on Data Mining. Society for Industrial and Applied Mathematics. 2002. pp. 261–274.
22. Audet C., Dennis Jr. J.E. A pattern search filter method for nonlinear programming without derivatives. SIAM Journal on Optimization. 2004. vol. 14. no. 4. pp. 980–1010.
23. Zhao Y., Zhou C.C. System and method for knowledge pattern search from networked agents. US Patent 8903756. 2014.
24. Torczon V. On the convergence of pattern search algorithms. SIAM Journal on Optimization. 1997. vol. 7. no. 1. pp. 1–25.
25. Kamotsky D., Vargas M. System and method for performing a pattern matching search. US Patent 10565188. 2020.
26. da Silva L.E.B., Elnabarawy I., Wunsch D.C. A Survey of Adaptive Resonance Theory Neural Network Models for Engineering Applications. Neural Networks. 2019. vol. 120. pp. 167–203.
27. Hecht-Nielsen R. Counterpropagation networks. Applied Optics. 1987. vol. 26. no. 23. pp. 4979–4984.
28. Pisano N.A. Searle's Chinese Room Reconsidered. Rerum Causae. 2019. vol. 10. no. 1.
29. Chen Y. Mechanisms of Winner-Take-All and Group Selection in Neuronal Spiking Networks. Front Comput Neurosci. 2017. vol. 11. pp. 20.
30. Çağatay H. et al. A Fair Version of the Chinese Room. Problemos. 2019. vol. 96. pp. 121–133.
31. Kussul E., Baidyk T. Improved method of handwritten digit recognition tested on MNIST database. Image and Vision Computing. 2004. vol. 22. no. 12. pp. 971–981.
32. LeCun Y., Bottou L., Bengio Y., Haffner P. Gradient-Based Learning Applied to Document Recognition. Proceedings of the IEEE. 1998. vol. 86. no. 11. pp. 2278–2324.
Copyright (c) Ivan Viktorovich Stepanyan, Andrey Vladimirovich Homich
This work is licensed under a Creative Commons Attribution 4.0 International License.