This whitepaper discusses the challenges of deploying artificial intelligence (AI) on edge platforms and the need for dedicated AI accelerators in silicon. Many real-time applications demand on-the-fly data analysis at low power.
Moreover, Industry 4.0 requirements, which emphasize Smart Sense + Smart Automation + Connectivity, are driving a major business and technology transformation across industry segments (for example: the image sensor industry, automation with a synthetic brain, the semiconductor industry, and wired/wireless communication technology). Adopting deep learning and Internet-of-Things (IoT) technologies not only enables intelligent automation but also helps industries scale up productivity and operating efficiency, while enabling new services for customers. "AI at the edge", delivered on low-footprint, low-power devices, also opens wide opportunities in segments such as smart farming, smart appliances, medical devices, robo-taxis, and drones (e.g., for drug delivery).

Many real-world problems demand a dedicated AI accelerator core that handles the demanding math and compute-intensive tasks. Today's silicon devices (e.g., FPGA / SoC / ASIC) built on the latest process technology nodes can accommodate massive numbers of computing elements, memories, and math functions with increased memory and interface bandwidth, in a smaller footprint and at lower power. A purpose-built AI accelerator topology therefore offers clear advantages: responsiveness, better security, flexibility, performance per watt, and adaptability. This helps in deploying different network models addressing various applications and use-case scenarios; a scalable AI accelerator for network inference ultimately enables fast prototyping and customization during deployment.
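As a rough illustration of the math-intensive workload such an accelerator offloads, the minimal C sketch below (not taken from this whitepaper; layer sizes and data are arbitrary assumptions) computes one 8-bit quantized fully connected layer as a sequence of multiply-accumulate (MAC) operations. A hardware accelerator would typically map this inner loop onto parallel MAC arrays rather than executing it sequentially on a general-purpose core.

```c
/* Minimal sketch: the multiply-accumulate (MAC) kernel that dominates
 * neural-network inference. Layer dimensions below are illustrative only. */
#include <stdint.h>
#include <stdio.h>

#define IN_DIM  256   /* illustrative input-feature count  */
#define OUT_DIM 128   /* illustrative output-neuron count  */

/* 8-bit quantized fully connected layer: out = weights * in + bias */
static void fc_int8(const int8_t in[IN_DIM],
                    const int8_t weights[OUT_DIM][IN_DIM],
                    const int32_t bias[OUT_DIM],
                    int32_t out[OUT_DIM])
{
    for (int o = 0; o < OUT_DIM; ++o) {
        int32_t acc = bias[o];
        for (int i = 0; i < IN_DIM; ++i) {
            acc += (int32_t)weights[o][i] * (int32_t)in[i];  /* one MAC */
        }
        out[o] = acc;
    }
}

int main(void)
{
    static int8_t  in[IN_DIM];
    static int8_t  weights[OUT_DIM][IN_DIM];
    static int32_t bias[OUT_DIM];
    static int32_t out[OUT_DIM];

    /* Fill with simple deterministic test data. */
    for (int i = 0; i < IN_DIM; ++i) in[i] = (int8_t)(i % 7);
    for (int o = 0; o < OUT_DIM; ++o) {
        bias[o] = o;
        for (int i = 0; i < IN_DIM; ++i) weights[o][i] = (int8_t)((o + i) % 5);
    }

    fc_int8(in, weights, bias, out);
    printf("out[0] = %d, MACs in this layer = %d\n", out[0], IN_DIM * OUT_DIM);
    return 0;
}
```

Even this small layer requires 32,768 MACs per inference; realistic network models repeat such layers many times, which is why performance per watt hinges on executing these operations in dedicated, parallel hardware rather than in software loops.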