With this methodology, cutting-edge performance is achieved using only 5% of the computational resources required by a traditional neural network.

About

Challenge

Deep learning approaches are widely used for efficient, rapid processing in big data applications. To better handle ever-increasing amounts of data, these networks have grown increasingly complex, comprising more and more layers. While depth improves accuracy, it comes at a price in processing requirements, which limits the applicability of these techniques in resource-constrained environments such as mobile phones and battery-operated devices. A new approach is needed that keeps deep learning effective while using far fewer computational resources.

Solution

Artificial Neural Networks (ANNs) are composed of layers of nodes acting as artificial neurons. Each node computes a weighted sum of its inputs, using the weights on its incoming connections, and passes the result through a nonlinear activation function. Although ANNs contain numerous nodes, it has been demonstrated that many of them are not needed for every item in the dataset. In light of this, locality-sensitive hashing (LSH) is used to efficiently determine which nodes are required: data is mapped to an integer value, with similar data points mapping to the same value. Hash tables built from these values identify the nodes closest to the input while ignoring those that are more distant (a code sketch of this idea appears at the end of this profile). With this methodology, cutting-edge performance is obtained using only 5% of the computational resources required by a traditional neural network.

Benefits and Features

  • Minimizes the computational resources needed for deep learning neural networks
  • Faster training and testing times, for improved battery life in mobile and embedded applications
  • Sparsity allows for asynchronous, massively parallel training with near-linear speedup in the number of processors

Market Potential / Applications

This more efficient method for deep learning neural networks reduces computational complexity and thereby improves battery life. These improvements will be most apparent in mobile and embedded devices with limited computational resources and finite power sources. The inherent sparsity in datasets leads to an approach that scales well across multiple processors.
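To make the mechanism concrete, below is a minimal Python sketch of one plausible version of the idea, using signed random projections (SimHash), a standard locality-sensitive hash family for cosine similarity. The dimensions, function names, and single-table design are illustrative assumptions, not details taken from this profile.

  # Minimal sketch of LSH-based neuron selection (illustrative only, not
  # the profiled implementation). SimHash maps similar vectors to the same
  # integer bucket with high probability.
  import numpy as np

  rng = np.random.default_rng(0)
  D, N, K = 64, 1024, 8                      # input dim, neurons, hash bits

  projections = rng.standard_normal((K, D))  # random hyperplanes, shared by
                                             # inputs and neuron weight vectors

  def simhash(v):
      """Map a vector to a K-bit integer bucket id."""
      bits = (projections @ v > 0).astype(int)
      return int((bits << np.arange(K)).sum())

  # Layer parameters: one weight vector and one bias per neuron.
  W = rng.standard_normal((N, D))
  b = rng.standard_normal(N)

  # Hash table: bucket id -> indices of neurons whose weights landed there.
  table = {}
  for j in range(N):
      table.setdefault(simhash(W[j]), []).append(j)

  def sparse_forward(x):
      """Evaluate only the neurons in the input's bucket; skip the rest."""
      active = table.get(simhash(x), [])
      out = np.zeros(N)
      for j in active:
          out[j] = max(0.0, W[j] @ x + b[j])  # ReLU on selected nodes only
      return out, active

  x = rng.standard_normal(D)
  out, active = sparse_forward(x)
  print(f"computed {len(active)} of {N} neurons")

In practice, several independent hash tables would typically be combined so that enough neurons are retrieved for each input; a single table is used here for brevity.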
