A New Embedded Memory

Intrinsic will deliver its first products as embedded non-volatile memories for integration into CMOS digital logic devices using standard EDA design flows. The memories will be configurable to the needs of each implementation, allowing the system architect to choose a memory architecture that best exploits the benefit of integrating the memory exactly where it is needed.

The application space for such a memory technology is huge. Two important applications illustrate the value such a memory can bring: microcontrollers and edge AI devices.

Microcontrollers

Microcontroller units (MCUs) are microprocessor systems that include embedded non-volatile memory (NVM) on the same die, primarily for storing program code and configuration data. As we use MCUs to bring the benefits of automation, security, connectivity and more to our products, we strive for systems that are lower power, more secure, cheaper and smaller. Integrating the memory on the same chip as the processor is the obvious way to achieve these goals. However, today's MCUs that do integrate NVM are constrained to older process technologies, because embedded Flash cannot be manufactured at more advanced nodes. Consequently, today's MCUs do not deliver their full potential. Intrinsic RRAM will remove this constraint and allow MCUs to benefit from both advanced process nodes and embedded non-volatile memory. This new breed of MCU will be much lower power and more cost-effective than today's devices.

Edge AI

Implementation of artificial intelligence at the ‘edge’ (edge AI) means using AI techniques in products and devices at the edge of our connectivity networks, or even standalone. These applications, which often face severe power constraints because of their physical location, require either vast amounts of local memory or a low-latency, high-bandwidth connection to the cloud. For many edge AI applications the latter is out of the question, so the solution is to bring as much non-volatile memory as close to the processor as possible.

Traditional architectures use external Flash chips for non-volatile storage, but, just like microcontrollers, edge AI devices would benefit significantly from integrating that memory on the same die as the processor. The benefit for AI goes further: a distributable embedded memory opens the opportunity to redefine the architectures used for such applications. Because the integrated memory will have read access speeds similar to the SRAM normally used, it enables new compute architectures with much higher memory bandwidth. New AI architectures using Intrinsic RRAM therefore have the opportunity to deliver much higher performance at much lower power than those built from external non-volatile memory and internal SRAM.
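The bandwidth argument above can be sketched with a simple roofline model, in which achievable performance is capped by either the accelerator's peak compute or by memory bandwidth multiplied by arithmetic intensity. All figures below are invented for illustration; they are not Intrinsic RRAM specifications or measurements.

```python
def attainable_gflops(peak_gflops, mem_bw_gbs, arith_intensity):
    """Roofline model: performance is limited either by peak compute
    or by memory bandwidth times arithmetic intensity (FLOPs/byte)."""
    return min(peak_gflops, mem_bw_gbs * arith_intensity)

# Hypothetical example figures (assumptions, not product data):
PEAK = 100.0             # accelerator peak compute, GFLOP/s
EXTERNAL_FLASH_BW = 0.4  # weight-read bandwidth from external Flash, GB/s
ON_DIE_NVM_BW = 40.0     # weight-read bandwidth from on-die NVM, GB/s
INTENSITY = 2.0          # FLOPs per byte of weights fetched

flash_perf = attainable_gflops(PEAK, EXTERNAL_FLASH_BW, INTENSITY)
rram_perf = attainable_gflops(PEAK, ON_DIE_NVM_BW, INTENSITY)
print(flash_perf, rram_perf)  # → 0.8 80.0
```

With external Flash the (hypothetical) workload is memory-bound at a small fraction of peak, while on-die memory bandwidth lets the same accelerator run close to its compute limit; this is the architectural headroom the paragraph above describes.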