Edge inference emerged as a key workload in 2019–20, and many companies have introduced dedicated chipsets for it. Several factors are driving AI processing to the edge device: privacy, security, cost, latency, and bandwidth all figure in the choice between data center and edge processing. Applications such as autonomous driving and navigation have sub-millisecond latency requirements that make edge processing mandatory, while others, such as speech recognition on smart speakers, raise privacy concerns. Keeping AI processing on the edge device addresses those privacy concerns while avoiding the bandwidth, latency, and cost of cloud computing. Omdia forecasts that global AI edge chipset revenue will grow from $7.7bn in 2019 to $51.9bn by 2025.
This Omdia report provides a quantitative and qualitative assessment of the opportunity for AI edge processing across several consumer and enterprise device markets: automotive, consumer and enterprise robots, drones, head-mounted displays (HMDs), mobile phones, PCs/tablets, security cameras, smart speakers, machine vision, and edge servers. For each device category, global revenue and shipment forecasts extend through 2025, segmented by chipset architecture, power consumption, compute capacity, training versus inference, and application attach rate.