BIANCHI STEFANO | Cycle: XXXIII
Tutor: SOTTOCORNOLA SPINELLI ALESSANDRO
Advisor: IELMINI DANIELE
Major Research Topic: CIRCUITAL ARCHITECTURES WITH SCALABLE SYNAPSES AND NEURONS FOR BIO-INSPIRED ARTIFICIAL INTELLIGENCE

Abstract:
The research activity of my doctoral studies focuses on artificial intelligence (AI), namely the ability to reproduce brain-like reasoning in a silicon chip. Computers able to learn from sensory stimuli coming from the external world, to infer abstract concepts and to make decisions are spurring a new technological revolution that is reshaping all aspects of our life and society. Every neuromorphic system has two principal components: the neuron, usually implemented in CMOS technology, and the synapse, which connects two neighboring neurons. The synapse can be modelled as a resistive-switching device whose resistance changes in response to an electrical stimulus coming from the neuron. All these "memristive devices" share the multilevel capability of tuning their conductance to any arbitrary value within a given range. In addition, memristive devices show outstanding area efficiency thanks to their two-terminal structure, which allows a minimum device size of only a few square nanometers, and stacking capability thanks to 3D integration.
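The multilevel conductance update described above can be sketched as a simple bounded-conductance model. The class name, parameter values and linear update rule below are illustrative assumptions for clarity, not a model of any specific RRAM or PCM device.

```python
# Minimal sketch of a multilevel memristive synapse: conductance moves
# in small steps under pulsed stimuli and is confined to an analog range.
# All values and the linear update rule are illustrative assumptions.

class MemristiveSynapse:
    def __init__(self, g_min=1e-6, g_max=1e-4, g_init=1e-5):
        self.g_min = g_min  # minimum conductance (S)
        self.g_max = g_max  # maximum conductance (S)
        self.g = g_init     # current conductance (S)

    def apply_pulse(self, delta_g):
        """Potentiating (delta_g > 0) or depressing (delta_g < 0) pulse;
        the new conductance is clipped to the allowed analog range."""
        self.g = min(self.g_max, max(self.g_min, self.g + delta_g))
        return self.g
```

Because the device is two-terminal, a crossbar of such elements stores one analog weight per cross-point, which is what enables the area efficiency mentioned above.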
The first approach to AI has been directed to the learning algorithms related to the ability of updating the synaptic weights by STDP (Spike-Timing-Dependent Plasticity) or SRDP (Spike-Rate-Dependent Plasticity). These learning mechanisms have been verified by simulations and experimental measurements in extended networks with resistive random access memory (RRAM) or phase change memory (PCM) devices as synapses. Great attention has also been given to predicting, through analytical models, the behavior of bio-inspired networks, in order to take a step towards a complete theory of bio-inspired computing.
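As an illustration of the STDP weight-update principle mentioned above, a minimal pair-based rule can be written as follows. The exponential form and the parameter values (`a_plus`, `a_minus`, `tau`) are common textbook assumptions, not the specific model used in the measurements.

```python
import math

# Illustrative pair-based STDP rule: the sign and size of the weight
# change depend on the relative timing of pre- and post-synaptic spikes.

def stdp_delta_w(t_pre, t_post, a_plus=0.01, a_minus=0.012, tau=20.0):
    """Return the weight update for one pre/post spike pair (times in ms)."""
    dt = t_post - t_pre
    if dt > 0:    # pre fires before post -> potentiation
        return a_plus * math.exp(-dt / tau)
    elif dt < 0:  # post fires before pre -> depression
        return -a_minus * math.exp(dt / tau)
    return 0.0
```

In a memristive implementation, a positive `stdp_delta_w` would map to a potentiating pulse on the synaptic device and a negative one to a depressing pulse.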
STDP and SRDP are not the only neural network architectures playing a leading role in the next computing revolution. Other approaches, like fully connected and convolutional networks, address machine learning through iterative supervised algorithms. In particular, these techniques have recently demonstrated very high accuracy in object and face recognition. However, despite their superhuman ability in tasks like pattern recognition, these techniques are not bio-inspired. Thus, they cannot reproduce human-like behaviors such as continual learning, i.e., the ability to learn new concepts throughout life without forgetting previous information.
To combine the computational efficacy of fully connected and convolutional networks with the bio-inspired plasticity of STDP, we have proposed a new kind of artificial neural network able to achieve continual learning. The new architecture can learn and classify new input objects without catastrophically forgetting previously learnt information. We have experimentally demonstrated the efficacy of our novel neural network on continual-learning tasks based on the MNIST and CIFAR-10 datasets. The achieved results confirm the relevance of memristive devices as synapses in terms of compactness and power consumption.
A key goal of artificial intelligence is thus to design networks capable of learning from their own experience, both for adaptation and consolidation. In biology, homeostasis is what regulates the learning activity to balance stability and plasticity in the brain. Introducing homeostasis into hardware neural networks would enable more stable and energy-efficient learning. For this reason, the research line has moved towards the study of a novel artificial neuron based on PCM devices, capable of spike-frequency regulation and power saving via homeostasis.
Homeostatic neurons are implemented by self-controlling the internal threshold of each neuron with PCM devices, in order to stabilize the neuron's spike rate. A neuron fires when it has specialized on a particular input; further spikes mean better specialization, but also higher power consumption. The homeostatic regulation overcomes this problem through gradual crystallization of the PCM threshold device at each neuronal spike. The multiple resistive steps of the PCM device, achieved by repeating identical pulsed signals, bring about a gradual increase of the internal threshold. This ensures improved pattern specialization, reduced power consumption and self-control of each neuron's activity as a function of the number of spikes. Homeostasis is shown to enable multi-pattern learning from the Fashion-MNIST dataset via unsupervised asynchronous spike-timing-dependent plasticity (ASTDP).
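The homeostatic mechanism above can be caricatured as a leaky integrate-and-fire neuron whose threshold grows by a fixed step at every spike, mimicking one crystallization step of the PCM threshold device. All constants and the unbounded linear threshold growth are simplifying assumptions made for illustration.

```python
# Sketch of a leaky integrate-and-fire neuron with a spike-driven,
# monotonically increasing threshold (homeostatic regulation).
# Constants are illustrative, not device-calibrated.

class HomeostaticNeuron:
    def __init__(self, v_th0=1.0, th_step=0.1, leak=0.95):
        self.v = 0.0            # membrane potential
        self.v_th = v_th0       # internal threshold
        self.th_step = th_step  # threshold increment per spike (one PCM step)
        self.leak = leak        # leak factor per time step

    def step(self, i_in):
        """Integrate one input sample; return True if the neuron fires."""
        self.v = self.leak * self.v + i_in
        if self.v >= self.v_th:
            self.v = 0.0                # reset membrane after the spike
            self.v_th += self.th_step   # homeostatic threshold increase
            return True
        return False
```

Driving this neuron with a constant input shows the intended behavior: the spike rate is high at first, then decreases as the threshold rises, which is exactly the specialization-with-power-saving effect described above.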
This novel homeostatic neuron is useful not only in the framework of continual learning and ASTDP, but also for reinforcement learning. Reinforcement learning closely mirrors a human-like approach to knowledge, being strictly grounded in experience. In particular, homeostasis driven by PCM devices can control the internal states of the neurons according to the past experience of the whole neural network. This becomes especially interesting in connection with the internal states of so-called recurrent neural networks (RNNs). By introducing homeostasis into RNNs, it is possible to achieve a brain-inspired RNN able to self-control its development towards a complete exploration of a given task. Thus, we have developed a novel homeostatic RNN that has been experimentally shown to solve complex reinforcement tasks like maze navigation. The autonomous agent relies only on PCM plasticity and homeostasis to explore the environment and become its own expert teacher via a penalty/reward scheme.
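The penalty/reward principle itself can be illustrated with a deliberately simple tabular scheme on a one-dimensional track. This toy example is a standard Q-learning sketch chosen only for clarity; it is not the PCM-based homeostatic RNN described above, and every name and parameter in it is an assumption.

```python
import random

# Toy penalty/reward navigation on a 1-D track: the agent starts at cell 0
# and learns to reach the rightmost cell. Rewards at the goal and small
# penalties elsewhere shape the behavior, as in any penalty/reward scheme.

def train(track_len=5, episodes=200, alpha=0.5, gamma=0.9, eps=0.1, seed=0):
    rng = random.Random(seed)
    q = [[0.0, 0.0] for _ in range(track_len)]  # value of (left, right) per cell
    for _ in range(episodes):
        pos = 0
        for _ in range(4 * track_len):          # step budget per episode
            if rng.random() < eps:              # occasional random exploration
                a = rng.randrange(2)
            else:                               # otherwise act greedily
                a = 1 if q[pos][1] >= q[pos][0] else 0
            nxt = max(0, min(track_len - 1, pos + (1 if a == 1 else -1)))
            # reward reaching the goal cell, penalize every other step
            r = 1.0 if nxt == track_len - 1 else -0.01
            q[pos][a] += alpha * (r + gamma * max(q[nxt]) - q[pos][a])
            pos = nxt
            if pos == track_len - 1:            # episode ends at the goal
                break
    return q
```

After training, the greedy policy moves right everywhere, i.e. the agent has become "its own teacher" purely through the rewards and penalties it experienced.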
The novel homeostatic neuron thus appears as an essential building block for solving AI tasks, proving to be a key mechanism for adaptation, learning and autonomous navigation.
In the following, the research line will continue exploring the potential and capabilities of bio-inspired AI for larger systems, further improving the concepts presented above and looking to biology for inspiration for novel learning mechanisms. The principal aim will be the large-scale demonstration of the aforementioned bio-inspired concepts. This will be pursued in two different ways, namely (i) by implementing a fully RRAM-based array automatically managed by an FPGA, and (ii) by designing integrated neuromorphic circuits using PCM devices in Cadence Virtuoso.