In this paper we present an analysis of the minimal hardware precision required to implement Support Vector Machine (SVM) classification within a Logarithmic Number System (LNS) architecture. Support Vector Machines are fast emerging as a powerful machine-learning tool for pattern recognition, decision-making, and classification. LNS exploits logarithmic compression for numerical operations: in the logarithmic domain, multiplication and division reduce to addition and subtraction, respectively, so hardware for these operations is significantly faster and less complex. Leveraging these inherent properties of LNS, we achieve significant hardware savings over a double-precision floating-point implementation of an SVM classification algorithm.
Faisal M. Khan, Mark G. Arnold, William M. Pottenger
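To illustrate the LNS property the abstract relies on, the following C sketch (ours, not the authors'; the names lns_encode, lns_mul, lns_div, and lns_add are hypothetical) stores each value as its base-2 logarithm, so multiplication and division become a single add or subtract, while LNS addition needs the nonlinear function log2(1 + 2^d), which real hardware typically approximates with a lookup table:

```c
/* Sketch only: values are represented by log2 of their magnitude
 * (sign handling omitted for brevity). */
#include <stdio.h>
#include <math.h>

typedef double lns_t;              /* base-2 logarithm of the value */

lns_t  lns_encode(double x)  { return log2(x); }
double lns_decode(lns_t lx)  { return exp2(lx); }

lns_t lns_mul(lns_t lx, lns_t ly) { return lx + ly; }  /* x*y -> add      */
lns_t lns_div(lns_t lx, lns_t ly) { return lx - ly; }  /* x/y -> subtract */

/* x+y -> max + log2(1 + 2^(min-max)); the costly LNS operation,
 * usually realized with a table rather than exp2/log2 as here. */
lns_t lns_add(lns_t lx, lns_t ly) {
    lns_t hi = fmax(lx, ly), lo = fmin(lx, ly);
    return hi + log2(1.0 + exp2(lo - hi));
}

int main(void) {
    lns_t la = lns_encode(3.0), lb = lns_encode(4.0);
    printf("3*4 = %g\n", lns_decode(lns_mul(la, lb)));  /* 12   */
    printf("3/4 = %g\n", lns_decode(lns_div(la, lb)));  /* 0.75 */
    printf("3+4 = %g\n", lns_decode(lns_add(la, lb)));  /* 7    */
    return 0;
}
```

Since the kernel evaluations and weighted sums of SVM classification are dominated by multiplications, replacing each multiplier with an adder in this way is the source of the hardware savings the paper quantifies.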