Design of learning chips and silicon neurons

I am developing a new approach to the design of learning (regression/classification) ICs that addresses many of the problems usually associated with such chips and allows a much higher number of neurons per IC.

Currently, I am developing a low-power analog IC that implements one artificial neuron with a single transistor. On this topic I am collaborating with Monash University (Australia), MIT (USA), TUM (Germany), and the University of Zagreb (Croatia).

Furthermore, I have designed a very small RBF activation function circuit that uses only four transistors; it is therefore well suited for cases where a large number of non-linear functions, i.e. neurons, is needed. The findings were published in Neurocomputing; a software sketch of the Gaussian activation such a circuit realizes follows the reference below.

  • D. Vrtaric, V. Ceperic, and A. Baric. “Area-efficient differential Gaussian circuit for dedicated hardware implementations of Gaussian function based machine learning algorithms”. In: Neurocomputing (2013). Corresponding author.
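
For context, the Gaussian function that such a circuit approximates in hardware is easy to state in software. The following minimal Python sketch shows the bell-shaped activation (the parameter names mu and sigma are illustrative and not taken from the paper):

    import numpy as np

    def gaussian_activation(x, mu=0.0, sigma=1.0):
        # Gaussian (RBF) activation: peaks at x = mu, width set by sigma.
        # A hardware implementation approximates this bell shape with
        # transistor characteristics rather than computing exp() directly.
        return np.exp(-((x - mu) ** 2) / (2.0 * sigma ** 2))

    print(gaussian_activation(np.array([-1.0, 0.0, 1.0])))  # [0.6065 1. 0.6065]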

In addition, I have designed several learning algorithms that produce very compact models and therefore require little on-chip area; an illustration of the size/accuracy trade-off follows the references below.

  • V. Ceperic, G. Gielen, and A. Baric. “Sparse multikernel support vector regression machines trained by active learning”. In: Expert Systems with Applications 39.12 (2012), pp. 11029–11035. Corresponding author.
  • V. Ceperic and A. Baric. “System, Method and Computer Program Product for Modelling Electronic Circuits”. Patent application 13/353,701 (US). Jan. 2012.
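
To illustrate why compact models matter for hardware, the sketch below uses scikit-learn's off-the-shelf SVR (chosen purely for illustration; it is not the multikernel active-learning method from the paper above) to show how the number of support vectors, and hence the weight storage a chip would need, can be traded against accuracy:

    import numpy as np
    from sklearn.svm import SVR

    # Toy 1-D regression problem.
    rng = np.random.default_rng(0)
    X = rng.uniform(-3, 3, size=(200, 1))
    y = np.sin(X).ravel() + 0.05 * rng.standard_normal(200)

    # A wider epsilon tube tolerates more error but keeps fewer support
    # vectors, i.e. a smaller model to store and evaluate on chip.
    for eps in (0.01, 0.1, 0.3):
        model = SVR(kernel="rbf", epsilon=eps).fit(X, y)
        print(f"epsilon={eps}: {len(model.support_)} support vectors")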

Finally, most neuromorphic chips operate in the time domain; I have developed several time-domain learning algorithms (a sketch of the general formulation follows the reference below), e.g.:

  • V. Ceperic, G. Gielen, and A. Baric. “Recurrent sparse support vector regression machines trained by active learning in the time-domain”. In: Expert Systems with Applications 39.12 (2012), pp. 10933–10942. Corresponding author.
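
As an illustration of the general idea, the sketch below casts a dynamic modelling task as regression over lagged input and output samples, a standard NARX-style construction (again using scikit-learn's SVR purely for illustration; this is not the specific recurrent algorithm from the paper above):

    import numpy as np
    from sklearn.svm import SVR

    def make_lagged(u, y, n_lags):
        # Build regressors from past inputs and past outputs so that a
        # static kernel machine can model time-domain (dynamic) behaviour.
        X, t = [], []
        for k in range(n_lags, len(u)):
            X.append(np.concatenate([u[k - n_lags:k], y[k - n_lags:k]]))
            t.append(y[k])
        return np.array(X), np.array(t)

    # Toy dynamic system: a first-order low-pass response to a random input.
    rng = np.random.default_rng(1)
    u = rng.standard_normal(500)
    y = np.zeros_like(u)
    for k in range(1, len(u)):
        y[k] = 0.9 * y[k - 1] + 0.1 * u[k - 1]

    X, t = make_lagged(u, y, n_lags=3)
    model = SVR(kernel="rbf").fit(X, t)
    print("Training fit R^2:", model.score(X, t))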