Arm has introduced the Arm®v9 architecture in response to the global demand for ubiquitous specialized processing with increasingly capable security and artificial intelligence (AI). Armv9 is the first new Arm architecture in a decade, building on the success of Armv8 which today drives the best performance-per-watt everywhere computing happens.
The number of Arm-based chips shipped continues to accelerate, with more than 100 billion devices shipped over the last five years. At the current rate, 100 percent of the world’s shared data will soon be processed on Arm, whether at the endpoint, in data networks, or in the cloud. Such pervasiveness confers a responsibility on Arm to deliver more security and performance, along with the other new features in Armv9. The new capabilities in Armv9 will accelerate the move from general-purpose to more specialized compute across every application as AI, the Internet of Things (IoT) and 5G gain momentum globally.
To address the greatest technology challenge today – securing the world’s data – the Armv9 roadmap introduces the Arm Confidential Compute Architecture (CCA). Confidential computing shields portions of code and data from access or modification while in use, even from privileged software, by performing computation in a hardware-based secure environment.
The Arm CCA will introduce the concept of dynamically created Realms, usable by all applications, in a region that is separate from both the secure and non-secure worlds. For example, in business applications, Realms can protect commercially sensitive data and code from the rest of the system while it is in use, at rest, and in transit. In a recent Pulse survey of enterprise executives, more than 90 percent of the respondents believe that if confidential computing were available, the cost of security could come down, enabling them to dramatically increase their investment in engineering innovation.
The ubiquity and range of AI workloads demands more diverse and specialized solutions. For example, it is estimated there will be more than eight billion AI-enabled voice-assisted devices in use by the mid-2020s, and 90 percent or more of on-device applications will contain AI elements along with AI-based interfaces like vision or voice.
To address this need, Arm partnered with Fujitsu to create the Scalable Vector Extension (SVE) technology, which is at the heart of Fugaku, the world’s fastest supercomputer. Building on that work, Arm has developed SVE2 for Armv9 to enable enhanced machine learning (ML) and digital signal processing (DSP) capabilities across a wider range of applications.
SVE2 enhances the processing ability of 5G systems, virtual and augmented reality, and ML workloads running locally on CPUs, such as image processing and smart home applications. Over the next few years, Arm will further extend the AI capabilities of its technology with substantial enhancements in matrix multiplication within the CPU, in addition to ongoing AI innovations in its Mali™ GPUs and Ethos™ NPUs.
However, as the industry moves from general-purpose computing towards ubiquitous specialized processing, annual double-digit CPU performance gains are not enough. Along with enhancing specialized processing, Arm’s Total Compute design methodology will accelerate overall compute performance through focused system-level hardware and software optimizations and increases in use-case performance.
By applying Total Compute design principles across its entire IP portfolio of automotive, client, infrastructure and IoT solutions, Arm will deliver Armv9 system-level technologies that span the whole IP solution, in addition to improving individual IP. Arm is also developing several technologies to increase frequency, bandwidth, and cache size, and to reduce memory latency, to maximize the performance of Armv9-based CPUs.
Learn more about Arm’s vision for the next decade of computing here. The site features several in-depth, on-demand videos.