Accelerating Intelligence-at-the-Edge for Embedded & IoT Applications
Internet of Things
50-Minute Technical Session
Audience Level: Intermediate
Existing neural network systems in artificial intelligence (AI) applications are largely cloud-based for both training and inference. However, long latencies, privacy and security regulations, and high IT costs are leading designers of rich embedded and IoT applications to put AI capabilities on the device. Doing so requires efficient compute processing with high throughput, low-latency interfaces to sensors, and machine-learning capabilities for advanced AI. This talk explores these requirements and provides an overview of solutions currently available from Arm. Attendees will learn how to use general-purpose Cortex-A processors to accelerate machine-learning algorithms for emerging embedded and IoT applications.
An appreciation of the emerging trends in AI applications, the limitations of current solutions, and why inference must move closer to the edge.
An understanding of the key requirements for intelligence-at-the-edge to work.
A technical overview of Arm's solutions for intelligence-at-the-edge, across the Cortex-A and Cortex-M portfolios.