How parallel architecture is used for accelerating BL-SOM using dedicated hardware

What is the Batch Learning Self-Organizing Map (BL-SOM)?

This is a review article, and all rights are reserved by Ryota Miyauchi, Akira Kojima, Hideyuki Kawabata and Tetsuo Hironaka, Graduate School of Information Sciences, Hiroshima City University. Let's first learn what BL-SOM is. SOM is an unsupervised method used for data analysis. It converts a large number of high-dimensional input vectors into a 2-dimensional plane that represents the relationships between the input vectors. In BL-SOM, the batch learning variant, this learning process is independent of the order of the input vectors.

The problem with the original SOM is that its learned results change when the order of the input vectors changes, because it processes the inputs one at a time while extracting the degree of similarity between them. BL-SOM avoids this by learning from all the inputs as a batch.

What is the BL-SOM procedure?

BL-SOM has a two-layer structure:

  • an input layer
  • an output layer

Three steps are involved in the BL-SOM procedure:

  • Competitive Process
  • Cooperation Process
  • Adaptation Process

Competitive Process 

In this step, the distance between every input vector and the reference vectors of all nodes on the map is calculated. After calculating the distances, the node whose reference vector is closest to each input vector is determined as that input's best matching node. Because all nodes compete and the closest one wins, this is called the competitive process.
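As a rough software sketch of this step, the best matching node search can be written in Python as below. The function and array names, the shapes, and the use of Manhattan distance here are my own assumptions for illustration, not details taken from the paper.

```python
import numpy as np

def competitive_process(inputs, references):
    """For every input vector, find its best matching node (BMU).

    inputs:     (num_inputs, dim) -- high-dimensional input vectors
    references: (num_nodes, dim)  -- reference vectors of the map nodes
    Returns an array of BMU indices, one per input vector.
    """
    # Distance between every input and every node's reference vector;
    # broadcasting gives a (num_inputs, num_nodes, dim) difference tensor.
    dists = np.abs(inputs[:, None, :] - references[None, :, :]).sum(axis=2)
    # The node with the smallest distance "wins" the competition.
    return dists.argmin(axis=1)
```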

 


Cooperation Process 

After completing the competitive process, we determine the distance on the map between all nodes and the best matching nodes that were found during the competitive process. The Euclidean distance formula could be used for this, but it requires multipliers, which increase the size of the hardware; therefore, the Manhattan distance formula is used instead. This formula first subtracts the two vectors componentwise, then takes the absolute value of each component, and finally adds the resulting values. Note that the distance calculation is independent for each node, so the Manhattan distance calculations can be parallelized across nodes; the computation over the vector dimensions can also be parallelized. The amount of learning is then distributed to all best matching nodes on the map.
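A minimal sketch of this map-distance computation, assuming the nodes sit on a regular 2-D grid; the grid layout and function names are my assumptions:

```python
import numpy as np

def map_distances(grid_coords, bmu_indices):
    """Manhattan distance on the map between every node and each BMU.

    grid_coords: (num_nodes, 2) -- (row, col) position of each node on the map
    bmu_indices: (num_inputs,)  -- BMU index for each input vector
    Returns a (num_inputs, num_nodes) distance matrix.
    """
    bmu_coords = grid_coords[bmu_indices]  # (num_inputs, 2)
    # Subtract, take absolute values, and sum -- no multipliers needed,
    # which is why this is cheaper than Euclidean distance in hardware.
    return np.abs(bmu_coords[:, None, :] - grid_coords[None, :, :]).sum(axis=2)
```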

A neighborhood function is used so that a larger amount of learning is allocated to the nodes closer to the best matching nodes.
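The article does not name a specific neighborhood function; a Gaussian kernel is a common choice in SOM implementations, so the sketch below assumes one:

```python
import numpy as np

def neighborhood(dist, radius):
    """Allocate a larger learning amount to nodes closer to the BMU.

    dist:   (num_inputs, num_nodes) map distances from map_distances()
    radius: current neighborhood radius (typically shrinks over iterations)
    A Gaussian kernel is assumed here; the accelerator's actual function
    may differ.
    """
    return np.exp(-(dist.astype(float) ** 2) / (2.0 * radius ** 2))
```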

Adaptation Process

Based on the amount of learning computed in the cooperation process, the reference vectors of all the nodes are updated.
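The standard batch-SOM update sets each reference vector to the neighborhood-weighted mean of the input vectors; assuming the accelerator follows this rule, a sketch is:

```python
import numpy as np

def adaptation_process(inputs, h, eps=1e-12):
    """Update every node's reference vector from the accumulated learning.

    inputs: (num_inputs, dim)
    h:      (num_inputs, num_nodes) neighborhood weights from neighborhood()
    Each reference vector becomes the neighborhood-weighted mean of all
    input vectors (the standard batch-SOM update rule).
    """
    weighted_sum = h.T @ inputs                 # (num_nodes, dim)
    total_weight = h.sum(axis=0)[:, None]       # (num_nodes, 1)
    return weighted_sum / (total_weight + eps)  # eps guards divide-by-zero
```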

All three processes are repeated a number of times during learning. Once the competitive process has produced the best matching node for an input vector, the cooperation process needs no further data from it, so the two units can work on different input vectors at the same time and their computations can be pipelined.
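For reference, the sketches above can be combined into one sequential, software-only learning loop; the iteration count and radius schedule are arbitrary illustration values, not taken from the paper:

```python
import numpy as np

def bl_som(inputs, grid_rows, grid_cols, num_iters=20, radius0=3.0):
    """Run the three BL-SOM processes for a fixed number of iterations."""
    rng = np.random.default_rng(0)
    num_nodes, dim = grid_rows * grid_cols, inputs.shape[1]
    references = rng.standard_normal((num_nodes, dim))
    grid_coords = np.indices((grid_rows, grid_cols)).reshape(2, -1).T

    for t in range(num_iters):
        radius = max(radius0 * (1.0 - t / num_iters), 0.5)  # shrink over time
        bmus = competitive_process(inputs, references)                # competition
        h = neighborhood(map_distances(grid_coords, bmus), radius)   # cooperation
        references = adaptation_process(inputs, h)                   # adaptation
    return references
```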

BL-SOM Accelerator 

Here an introduction to the BL-SOM accelerator is given. It consists of a controller, an input vector memory, and individual units for each of the three stages.

Also read here:

https://ieeexplore.ieee.org/document/8793430

 

