New architecture uses 3D memories to enable greater speed and energy efficiency in grammar parsing for virtual assistants
This technology is an in-memory computing architecture that enables computing systems to solve graph transitive closure problems more efficiently. It also requires less energy and is faster at accessing and processing data than traditional von Neumann architectures. Many applications depend on graph transitive closure solutions to perform complex procedures such as resolving database queries, parsing grammars for virtual assistants, and determining data dependencies at compilation.
By leveraging 3D nanoscale crossbar memory circuits, the new logic-in-memory architecture avoids the memory-processor bottleneck that arises when high-performance machines use traditional von Neumann-based architectures. Another problem with traditional architectures, which use volatile DRAM and SRAM memories, is that stored data is lost when system power is turned off. They therefore require a constant supply of power to refresh stored data, resulting in poor energy efficiency. Furthermore, in the von Neumann architecture, the separation of memory and processing units places serious restrictions on performance. According to the International Data Corporation (IDC), the world will produce 44 zettabytes, or 44 trillion gigabytes, of data by 2020. This growth will add to the challenge of power-efficient computing with structured big data.
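To make the target problem concrete, the following sketch computes a graph's transitive closure on a conventional CPU using the classic Warshall algorithm; this is the kind of reachability computation the in-memory architecture is designed to accelerate (the hardware itself would perform the equivalent work inside the crossbar memory, not in Python).

```python
def transitive_closure(n, edges):
    """Return an n x n boolean reachability matrix for a directed graph.

    Illustrative CPU version of the graph transitive closure problem;
    the function name and representation are chosen for this example.
    """
    # Every vertex trivially reaches itself
    reach = [[i == j for j in range(n)] for i in range(n)]
    for u, v in edges:
        reach[u][v] = True
    # Warshall's algorithm: allow paths through each intermediate vertex k
    for k in range(n):
        for i in range(n):
            if reach[i][k]:
                for j in range(n):
                    if reach[k][j]:
                        reach[i][j] = True
    return reach

# Example: edges 0 -> 1 and 1 -> 2, so vertex 0 reaches vertex 2 transitively
closure = transitive_closure(3, [(0, 1), (1, 2)])
print(closure[0][2])  # True
```

On a von Neumann machine, every step of this triple loop shuttles the reachability matrix between memory and processor; a logic-in-memory design collapses that data movement, which is the source of the claimed speed and energy gains.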
• Faster and more energy-efficient than traditional von Neumann architecture
• Scalable for use with large graphs
• High-performance computing machines and computational data science
• Cybersecurity and low-energy military solutions
• ASIC (application-specific integrated circuit) implementations of graph algorithms in hardware