I am developing a new approach to the design of learning (regression/classification) ICs that addresses many of the problems usually associated with learning ICs and allows a much higher number of neurons per chip.
Currently, I am developing a low-power analog IC that implements one artificial neuron with a single transistor. On this topic I am collaborating with Monash University (Australia), MIT (USA), TUM (Germany), and the University of Zagreb (Croatia).
Furthermore, I have designed a very small RBF (radial basis function) activation-function circuit that uses only 4 transistors (findings published in a respected journal), making it well suited to cases where a large number of nonlinear functions, i.e. neurons, is needed.
 D. Vrtaric, V. Ceperic, and A. Baric. “Area-efficient differential Gaussian circuit for dedicated hardware implementations of Gaussian function based machine learning algorithms”. In: Neurocomputing (2013). Corresponding author.
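The published circuit realizes the Gaussian in analog hardware; as a purely functional reference (not a model of the circuit itself, and with illustrative center/width parameters), the RBF activation it computes can be sketched as:

```python
import math

def gaussian_activation(x, center=0.0, width=1.0):
    """Gaussian (RBF) activation: exp(-((x - center) / width)**2 / 2).

    Peak response of 1.0 at the center, decaying symmetrically away
    from it -- the nonlinearity each RBF neuron applies to its input.
    """
    return math.exp(-((x - center) / width) ** 2 / 2)

print(gaussian_activation(0.0))  # 1.0 (at the center)
print(gaussian_activation(2.0) < gaussian_activation(1.0))  # True (decay)
```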
In addition, I have designed several learning algorithms that produce very small models requiring little on-chip area.
 V. Ceperic, G. Gielen, and A. Baric. “Sparse multi-kernel support vector regression machines trained by active learning”. In: Expert Systems with Applications 39.12 (2012), pp. 11029–11035. Corresponding author.

V. Ceperic and A. Baric. “System, Method and Computer Program Product for Modelling Electronic Circuits”. Patent application 13,353,701 (US). Jan. 2012.
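The core idea behind such small models is that only a few informative samples need to be retained as support vectors. A toy matching-pursuit-flavoured sketch of that active-learning loop (a conceptual stand-in, not the published SVR algorithm; kernel width and tolerances are illustrative):

```python
import math

def rbf(a, b, width=0.3):
    # Gaussian kernel centred at b; rbf(b, b) == 1.
    return math.exp(-((a - b) / width) ** 2 / 2)

def fit_sparse(xs, ys, tol=0.1, max_sv=20):
    """Greedy active-learning-style fit: repeatedly pick the sample with
    the largest residual and add it as a support vector whose weight
    cancels that residual, until all residuals fall below tol."""
    support, weights = [], []

    def predict(x):
        return sum(w * rbf(x, s) for w, s in zip(weights, support))

    while len(support) < max_sv:
        residuals = [y - predict(x) for x, y in zip(xs, ys)]
        worst = max(range(len(xs)), key=lambda i: abs(residuals[i]))
        if abs(residuals[worst]) < tol:
            break
        support.append(xs[worst])
        weights.append(residuals[worst])  # zeroes the residual at that point
    return predict, support

xs = [i * 0.1 for i in range(31)]        # samples on [0, 3]
ys = [math.sin(x) for x in xs]
model, support = fit_sparse(xs, ys)
print(len(support), "support vectors out of", len(xs), "samples")
```

The resulting model stores only the selected support vectors and their weights, which is what keeps the on-chip footprint small.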
Finally, most neuromorphic chips operate in the time domain; I have developed several time-domain learning algorithms, e.g.
 V. Ceperic, G. Gielen, and A. Baric. “Recurrent sparse support vector regression machines trained by active learning in the time-domain”. In: Expert Systems with Applications 39.12 (2012), pp. 10933–10942. Corresponding author.
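A common way to cast regression into the time domain (one ingredient of recurrent time-domain learning; this helper is an illustrative sketch, not code from the paper) is a tapped delay line, where each sample is predicted from its recent past:

```python
def delay_line_features(signal, taps=3):
    """Build a tapped-delay-line dataset for time-domain regression:
    each target sample is predicted from its `taps` preceding samples."""
    X, y = [], []
    for t in range(taps, len(signal)):
        X.append(signal[t - taps:t])
        y.append(signal[t])
    return X, y

# A 5-sample ramp with 2 taps yields 3 training pairs.
X, y = delay_line_features([0, 1, 2, 3, 4], taps=2)
print(X)  # [[0, 1], [1, 2], [2, 3]]
print(y)  # [2, 3, 4]
```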