Thursday, February 26, 2015

Beginning with FPGA Implementation of SVM Algorithm

The SVM algorithm is perhaps the most widely used classification algorithm, owing to its ability to handle high-dimensional feature spaces. As portable systems get smarter and more computationally capable, there is a growing demand to run efficient machine learning algorithms on low-power, high-speed embedded hardware that can execute large computations in real time. Most microcontroller cores cannot handle such heavy computation in real time because of their sequential instruction execution and low clock frequencies. Here FPGAs come to the rescue: they can be used as standalone systems without a general-purpose processor, and they can handle large computations by processing data in parallel.

    Basic overview of SVM



Support vector machines are widely used binary classifiers, known for their ability to handle high-dimensional data. They classify data by separating the classes with a hyperplane that maximizes the margin between them. The data points closest to the hyperplane are known as support vectors. The selected decision boundary is thus the one that minimizes the generalization error (by maximizing the margin between the classes).
For the linearly separable case, it does so by minimizing the following objective function:

    minimize  (1/2) ||w||^2    subject to  y_i (w · x_i + b) ≥ 1  for all i

Thus the optimal solution is given by:

    w = Σ_i α_i y_i x_i
A new test example x is classified by the following function:

    f(x) = sign( Σ_i α_i y_i (x_i · x) + b )

Here α_i is the Lagrange multiplier associated with each training sample.
For a non-linearly separable input space, we replace the dot product with a kernel function K(x_i, x). Some of the widely used kernels are:

    Linear:      K(x, y) = x · y
    Polynomial:  K(x, y) = (x · y + c)^d
    RBF:         K(x, y) = exp(-γ ||x - y||^2)
    Sigmoid:     K(x, y) = tanh(γ (x · y) + c)
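As a quick reference, the four kernels above can be sketched in plain Python (the default values of γ, c, and d here are illustrative assumptions, not part of any standard):

```python
import math

def linear(x, y):
    # Plain dot product of two equal-length vectors.
    return sum(a * b for a, b in zip(x, y))

def polynomial(x, y, c=1.0, d=2):
    # (x · y + c)^d
    return (linear(x, y) + c) ** d

def rbf(x, y, gamma=0.5):
    # exp(-γ ||x - y||^2)
    return math.exp(-gamma * sum((a - b) ** 2 for a, b in zip(x, y)))

def sigmoid(x, y, gamma=0.5, c=0.0):
    # tanh(γ (x · y) + c)
    return math.tanh(gamma * linear(x, y) + c)
```

Note that the RBF kernel of any point with itself is exactly 1, which is a handy sanity check for a hardware implementation.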

FPGA implementation of Support Vector Machines

 

In most applications, the input data is first passed through a feature extraction unit that extracts the relevant information. The extracted features are then used to train the SVM. The training phase is a one-time process, so it is usually executed offline using standard software packages. Training finds the support vectors in the input data along with their weights and the bias coefficient. This information can then be stored in the FPGA's memory blocks.
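Since FPGA block RAM stores integers rather than floats, the trained coefficients typically have to be quantized before loading. Below is a minimal sketch of this step, assuming a 16-bit signed Q8.8 fixed-point format (the format, word width, and the example coefficient values are all assumptions for illustration):

```python
def to_fixed_point(value, frac_bits=8, word_bits=16):
    """Quantize a float to a signed fixed-point integer (Q8.8 by default),
    saturating at the representable range rather than wrapping around."""
    scaled = int(round(value * (1 << frac_bits)))
    lo = -(1 << (word_bits - 1))
    hi = (1 << (word_bits - 1)) - 1
    return max(lo, min(hi, scaled))

# Hypothetical trained α_i·y_i products and bias from an offline package.
alpha_y = [0.75, -1.25, 0.5]
bias = 0.125

# Words that would be written into the FPGA's memory block.
mem_words = [to_fixed_point(a) for a in alpha_y] + [to_fixed_point(bias)]
```

Saturation (rather than silent overflow) matters here: a single wrapped coefficient can flip the sign of the whole decision function.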

Whenever a new data sample arrives, the stored support vectors are loaded from memory and the following operation is performed:

    f(x) = sign( Σ_i α_i y_i K(x_i, x) + b )

Any of the kernel functions described above can be used.
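Functionally, the per-sample operation reduces to the loop below, shown here as a plain Python sketch with an RBF kernel (the support vectors, coefficients, and γ value are made-up examples, not results from any real training run):

```python
import math

def rbf(x, y, gamma=0.5):
    # exp(-γ ||x - y||^2)
    return math.exp(-gamma * sum((a - b) ** 2 for a, b in zip(x, y)))

def svm_classify(x, support_vectors, alpha_y, bias, kernel=rbf):
    """f(x) = sign( Σ_i α_i·y_i · K(x_i, x) + b ), returned as +1 or -1."""
    acc = bias
    for sv, ay in zip(support_vectors, alpha_y):
        acc += ay * kernel(sv, x)
    return 1 if acc >= 0 else -1

# Toy example: one positive and one negative support vector.
svs = [[0.0, 0.0], [2.0, 2.0]]
ay = [1.0, -1.0]
label = svm_classify([0.1, 0.1], svs, ay, bias=0.0)
```

On the FPGA, the body of this loop is exactly what gets mapped onto hardware: one kernel evaluation and one multiply-accumulate per support vector.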

A large number of multiplications are executed here, so we need to exploit the parallel architecture of the FPGA to make it suitable for real-time applications, using multiple MAC (multiply-accumulate) units that execute in parallel.
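The parallelization itself is straightforward to sketch: the vectors are sliced across several MAC lanes, each lane accumulates its own partial sum, and an adder tree combines the results. The Python below only models the data flow; on the FPGA the lanes would run concurrently in hardware (the lane count of 4 is an arbitrary assumption):

```python
def mac_unit(weights, samples):
    """One MAC lane: accumulates the products over its slice of the vectors."""
    acc = 0
    for w, s in zip(weights, samples):
        acc += w * s
    return acc

def parallel_dot(weights, samples, n_units=4):
    # Split the vectors across n_units lanes; in hardware these run in parallel.
    chunk = (len(weights) + n_units - 1) // n_units
    partials = [mac_unit(weights[i:i + chunk], samples[i:i + chunk])
                for i in range(0, len(weights), chunk)]
    # Adder tree combining the per-lane partial sums.
    return sum(partials)
```

The speed-up comes at a cost in DSP slices: doubling the lane count roughly halves the latency per sample but doubles the multiplier usage.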

Another issue in an FPGA realization is synchronization: the next input should be accepted only after SVM testing on the prior input has finished. We therefore need a proper synchronization mechanism, for example a set of buffers that store incoming samples temporarily while the SVM completes its execution. If the number of support vectors is large, this problem becomes even harder to handle. The maximum clock frequency of the design can be determined by static timing analysis. Most FPGAs also provide on-chip PLLs that can generate clock frequencies above the on-board oscillator's frequency.
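The buffering idea can be modelled as a bounded FIFO between the input interface and the SVM engine, with back-pressure when the SVM falls behind. This is only a behavioural sketch (the depth of 4 is an assumption); in hardware it would be a BRAM-based FIFO with full/empty flags:

```python
from collections import deque

class InputBuffer:
    """Bounded FIFO between the input interface and the SVM engine."""

    def __init__(self, depth=4):
        self.depth = depth
        self.fifo = deque()

    def push(self, sample):
        # Returns False when full: the producer must stall (back-pressure).
        if len(self.fifo) >= self.depth:
            return False
        self.fifo.append(sample)
        return True

    def pop(self):
        # Returns the oldest sample, or None when the buffer is empty.
        return self.fifo.popleft() if self.fifo else None
```

The required depth depends on the worst-case SVM latency, which grows with the number of support vectors; that is exactly why a large support-vector set makes the synchronization problem worse.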

Realizing a parallel architecture also increases the device's power consumption, so we need to take care of the speed-power trade-off.

The most important part of designing any FPGA-based logic is designing proper interfacing circuits to handle real-world data. Most FPGA boards provide basic interfaces for some standard input data types, but we still have to apply proper buffering and pre-processing to the input data to make it suitable for the SVM.
