Fixed-Point Code Synthesis for Neural Networks

Authors

Hanane Benmaghnia (University of Perpignan Via Domitia, France), Matthieu Martel (University of Perpignan Via Domitia, France; Numalis, France) and Yassamine Seladji (University of Tlemcen Aboubekr Belkaid, Algeria)

Abstract

Over the last few years, neural networks have started penetrating safety-critical systems, where they make decisions in robots, rockets, autonomous cars, etc. A problem is that these critical systems often have limited computing resources. They therefore often rely on fixed-point arithmetic for its many advantages (speed, compatibility with small-memory devices). In this article, a new technique is introduced to tune the formats (precision) of already trained neural networks so that they use fixed-point arithmetic, which can be implemented with integer operations only. The new, optimized neural network computes its output with fixed-point numbers while keeping the accuracy within a threshold fixed by the user. A fixed-point code is then synthesized for this optimized network, ensuring that the threshold is respected for any input vector belonging to the range [xmin, xmax] determined during the analysis. From a technical point of view, we perform a preliminary analysis of the floating-point neural network to determine the worst cases, then we generate a system of linear constraints among integer variables that we solve by linear programming. The solution of this system gives the new fixed-point format of each neuron. The experimental results show the efficiency of our method, which ensures that the new fixed-point neural network behaves like the initial floating-point neural network.
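To make the idea of integer-only neuron evaluation concrete, here is a minimal sketch (not the paper's synthesized code) of one neuron computed in a hypothetical fixed-point format with f fractional bits; the helper names and the choice of f are assumptions for illustration only:

```python
# Sketch: integer-only evaluation of one neuron in a fixed-point format
# with f fractional bits (hypothetical format, not the paper's tool output).

def to_fixed(x, f):
    """Quantize a real value to an integer carrying f fractional bits."""
    return round(x * (1 << f))

def from_fixed(q, f):
    """Recover the real value represented by the integer q."""
    return q / (1 << f)

def neuron_fixed(weights, inputs, bias, f):
    """Dot product + bias using integers only.

    A product of two operands with f fractional bits carries 2f fractional
    bits, so the bias is aligned to 2f bits and the accumulator is rescaled
    back to f bits at the end.
    """
    acc = 0
    for w, x in zip(weights, inputs):
        acc += to_fixed(w, f) * to_fixed(x, f)   # 2f fractional bits
    acc += to_fixed(bias, f) << f                # align bias to 2f bits
    return acc >> f                              # back to f fractional bits

# Usage: compare against the floating-point result 0.75*1 - 1.5*0.5 + 0.125*2 + 0.25 = 0.5
w, x, b, f = [0.75, -1.5, 0.125], [1.0, 0.5, 2.0], 0.25, 12
print(from_fixed(neuron_fixed(w, x, b, f), f))   # prints 0.5 (up to rounding)
```

Choosing f per neuron is exactly what the linear-programming step described above decides, trading bit width against the user-fixed accuracy threshold.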

Keywords

Computer Arithmetic, Code Synthesis, Formal Methods, Linear Programming, Numerical Accuracy, Static Analysis.

Full Text  Volume 12, Number 2