Ambarella, a US-based semiconductor company, is often considered a hidden champion of the security industry. While its name doesn’t appear on camera casings, its chips are the silent powerhouses behind major security camera manufacturers like i-PRO, Axis, and Bosch (IQSIGHT), and even global names outside security such as Insta360 and GoPro. Cumulatively, Ambarella has shipped 400 million chips, including 42 million AI Systems-on-Chip (SoCs), making it an indispensable player in the intelligent edge device market.
At ISC West, the fabless semiconductor designer revealed its latest generation of SoCs engineered for advanced, transformer-based workloads: the CV7 edge AI vision family and the N1 edge GenAI family of SoCs.
Showcased in an exclusive, invitation-only suite away from the main exhibition floor, Ambarella’s more than 20 demonstrations highlighted three key innovations poised to revolutionize security systems integration:
* AI-powered Image Signal Processing (AISP) for real-time image quality enhancement.
* Coding-free system setup driven by agentic AI.
* Concurrent AI workloads, including ISP, AI agents, and comprehensive metadata generation and analysis.
While the CV7 and N1 empower advancements across all three areas, agentic AI stands out as the most immediately impactful innovation, distinctly separating this new generation of systems from their predecessors.
Agentic AI and Scalability Unleashed
“The moment users truly grasp the progress of AI capabilities is when they witness agentic AI in action,” explained Jérôme Gigot, VP of Marketing, Edge AI Products at Ambarella. “We illustrated this in a demo showcasing the setup of a warehouse monitoring system. The entire process relies on prompts—no coding required—to configure a system integrating edge AI cameras running on CV7s, on-prem AI boxes powered by N1s, and cloud analytics.”
Gigot continued, “The outcome is a context-aware system that intelligently monitors premises, provides users with concise summaries of events, and executes automatic, AI-powered responses. Setting up such a sophisticated system simply involves operators providing prompts, and the AI agent designs the entire architecture.”
Through these demonstrations, Ambarella also aimed to provide its expert audience with a comprehensive understanding of the full breadth of AI performance delivered by its new chips.
“The true test lies in running an entire system with multiple AI operations simultaneously,” Gigot emphasized. “This includes handling multiple 4K video streams, processing and encoding image data, and running various AI agents and models that leverage metadata generated at the edge. One of our key demonstrations proved that the AI inference rate remains constant at 80 to 82 frames per second, even when subjected to additional strain from 4K ISP processing, 4K encoding, and full CPU loading.”
Unparalleled Efficiency at the Edge
The demonstrations underscored one of Ambarella’s core strengths: high-performance chips designed for cutting-edge energy efficiency.
Gigot highlighted that efficiency is paramount for operating modern security systems, such as the advanced warehouse setup shown in the demo. Thanks to transformer-based models on the latest Ambarella chips, natural-language prompts can enable the analysis of past events and the precise setup of alarms for future incidents without requiring the compute-intensive re-analysis of recorded footage.
“With transformers, this process transforms into an efficient vector-match operation, relying solely on AI metadata captured when the footage was recorded,” Gigot elaborated. “Transformers boost AI performance by 10 to 20 times. Furthermore, our latest-generation chips, built on advanced 4-nanometer nodes—down from 5nm—achieve approximately a 15 percent reduction in power demand.”
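The vector-match operation Gigot describes can be pictured as a nearest-neighbor search over embeddings stored alongside the footage. The Python sketch below illustrates the idea; the embedding dimension, event names, and vectors are illustrative assumptions, not Ambarella's actual implementation.

```python
import numpy as np

# Hypothetical store of per-event AI metadata: each recorded event keeps a
# fixed-size embedding produced by a transformer when the footage was captured.
EMBED_DIM = 4  # illustrative; real systems use hundreds of dimensions
metadata_store = {
    "forklift_near_dock": np.array([0.9, 0.1, 0.0, 0.2]),
    "person_in_aisle_3":  np.array([0.1, 0.8, 0.3, 0.0]),
    "pallet_drop":        np.array([0.2, 0.1, 0.9, 0.1]),
}

def cosine_similarity(a, b):
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def search_events(query_embedding, top_k=1):
    """Rank past events by similarity to the query embedding --
    no re-analysis of the recorded video is needed."""
    scored = [(name, cosine_similarity(query_embedding, emb))
              for name, emb in metadata_store.items()]
    return sorted(scored, key=lambda item: item[1], reverse=True)[:top_k]

# A prompt like "show me forklifts near the loading dock" would be embedded
# by the same model; here we fake that query embedding for the demonstration.
query = np.array([0.85, 0.15, 0.05, 0.25])
print(search_events(query))
```

Because the search touches only the stored vectors, its cost is independent of the length or resolution of the underlying footage, which is what makes the operation cheap compared with re-running inference on the video.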
While systems powered by Ambarella’s high-performance chips enable remarkable complexity, the company is committed to ensuring this doesn’t compromise agility.
“Another key area we demonstrated is cloud orchestration,” stated Amit Badlani, Director of Product Marketing for AI/ML at Ambarella. “The focus was on showcasing the effortless distribution of AI workloads across a system. Users can simply drag-and-drop to decide where each AI workload should run—on edge devices, on an on-prem AI box, or within the cloud.”
“Users can also oversee and distribute workloads using their AI box, leveraging the N1’s substantial compute power,” he added, noting that previous generation systems often required additional hardware for such operations. “Our AI agent-based workflow also seamlessly integrates with a host of open-source AI tools, such as Openclaw or Dify, making system setup even more intuitive for those familiar with these tools.”
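Conceptually, this kind of orchestration reduces to a placement map from each AI workload to an execution tier, with "drag-and-drop" reassignment amounting to an update of that map. The sketch below is a hypothetical model of the idea; the workload names and tier labels are assumptions, not part of Ambarella's software.

```python
# Hypothetical workload-placement model: each AI workload is assigned to one
# of three tiers, and reassignment is just an update of the map.
TIERS = {"edge", "on_prem", "cloud"}

placement = {
    "person_detection": "edge",     # runs on a CV7-class camera
    "event_summaries":  "on_prem",  # runs on an N1-class AI box
    "fleet_analytics":  "cloud",
}

def move_workload(placement, workload, tier):
    """Reassign a workload to a different execution tier."""
    if tier not in TIERS:
        raise ValueError(f"unknown tier: {tier}")
    placement[workload] = tier
    return placement

# A legacy camera with no edge AI: shift its detection workload to the AI box.
move_workload(placement, "person_detection", "on_prem")
print(placement["person_detection"])
```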
Recognizing that many real-world deployments still rely on legacy devices lacking edge AI capabilities, Ambarella has designed its solutions with flexibility in mind.
“For such systems, flexibility is paramount,” Badlani affirmed. “Operators retain full control over how they orchestrate AI workloads. They can benefit from the latest AI advancements without the need for a costly ‘rip and replace’ overhaul. They simply move workloads to where their system possesses AI capabilities.”
“Ultimately, the goal is to enable more and more operations to run directly on the edge,” Badlani concluded. “The primary reason is to dramatically reduce latency to near zero, significantly down from the 20 to 30 milliseconds typically experienced when operations run in the cloud.”
“One persistent challenge, however, is achieving the highest accuracy in leaner edge AI models, as memory on the edge is also limited,” Badlani pointed out. “Our chips are not only exceptionally energy-efficient but also memory-efficient. This capability allows for the execution of more models, and models with higher parameter counts, directly within each camera.”
A Forward-Thinking Development Philosophy
With the development of the CV7 and N1 chips, Ambarella is strategically advancing towards highly specific AI models, aiming to stay ahead of the curve compared to other chip designers. This includes the unprecedented ability to run vision-language models directly at the edge on Ambarella chips, opening new avenues for camera manufacturers while demanding a strong focus on efficiency.
“Most AI models currently used in security cameras are generic video processing models,” Badlani explained. “We anticipate a future dominated by more specific models. A key focus area will undoubtedly be interpolating video frames with greater accuracy and efficiency. This is crucial because no edge AI, no matter how advanced, can analyze the full wealth of data if a camera is recording at, say, 30fps in 4K. Consequently, most edge devices extrapolate based on 10 to 20 randomly selected frames, which they then feed into a large language model.”
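The frame-selection step Badlani describes, picking a small subset of frames from a dense stream to hand to a language model, can be sketched as follows. Uniform sampling is an assumption here (the quote mentions random selection, and real pipelines vary); the function name and the 16-frame budget are illustrative.

```python
def sample_key_frames(total_frames, num_samples=16):
    """Pick num_samples frame indices spread evenly across the stream,
    a stand-in for the 10-20 frames an edge device forwards to a
    vision-language model."""
    if num_samples >= total_frames:
        return list(range(total_frames))
    step = total_frames / num_samples
    return [int(i * step) for i in range(num_samples)]

# One minute of 30 fps video = 1,800 frames; the model sees only 16 of them.
indices = sample_key_frames(30 * 60, num_samples=16)
print(len(indices), indices[:4])
```

With the model seeing under one percent of the recorded frames, the quality of each selected frame carries outsized weight, which is the point Badlani makes next about AISP.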
“The challenge for Ambarella is to ensure the pristine quality of those key frames,” Badlani stated. “With our AISP, we are actively enhancing the performance of these models by providing them with cleaner, more relevant input, thereby making them multiple times more accurate.”
One of the suite’s demos impressively showcased this, revealing AISP-powered images with visibly reduced motion blur and the complete absence of artifacts that appeared in streams using traditional processing.
“By elevating the quality of image data using our AISP, we effectively enhance the quality of all downstream AI workloads,” Badlani clarified. “And when utilized in conjunction with techniques like MoE (Mixture of Experts), downstream AI models can analyze frame content more efficiently, employing a lower parameter count to achieve the same accuracy. For instance, an 8-billion-parameter model running on the smaller N1 655 chip inside an AI box can achieve the same accuracy as a 20-billion-parameter model. This holds the potential to significantly reduce the energy and memory demands of the entire system.”
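To put those parameter counts in perspective, a rough weight-memory estimate shows why the smaller model matters on an edge box. The 8-billion and 20-billion figures come from the quote above; the 16-bit precision and the exclusion of activations, KV cache, and runtime overhead are simplifying assumptions.

```python
def model_memory_gb(params_billion, bytes_per_param=2):
    """Approximate weight memory for a model, assuming 16-bit (2-byte)
    parameters and ignoring activations and runtime overhead."""
    return params_billion * 1e9 * bytes_per_param / 1e9

small = model_memory_gb(8)   # 8B-parameter model
large = model_memory_gb(20)  # 20B-parameter model
print(small, large, large - small)
```

Under these assumptions, matching a 20B model's accuracy with an 8B model saves on the order of 24 GB of weight memory per deployed model, before counting the corresponding reduction in memory bandwidth and energy.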
The Cooper Platform: Unifying AI Development Across Generations
Ambarella’s robust partner network includes hardware, VMS, and VSaaS providers, and its “library” now counts over 20 third-party AI models, according to Badlani. For these partners, Ambarella’s Cooper Developer Platform serves as a comprehensive hardware and software solution for their edge AI systems, delivering powerful, safe, and secure compute and software capabilities while ensuring continuous development across chip generations.
“Developers who have created an application for a previous generation of chips, such as the CV5, can seamlessly adapt it for use on the CV7 or the N1,” Gigot affirmed. “All they need to do is select the desired AI and video performance for the application. They don’t even need to rewrite the drivers for the edge device’s sensors.”
Thanks to its relentless technological advancement, Ambarella holds a bullish market outlook for the coming year.
“We are entering an upgrade cycle for AI camera chips, transitioning from the CV5 family, which focused on Computer Vision and traditional CNNs (Convolutional Neural Networks), to the CV7 family. The CV7 adds robust support for advanced AI workloads, including transformers, VLMs, and LLMs, enabling sophisticated reasoning capabilities locally on devices. This has the potential to drive significant progress across the industry,” Gigot observed. “Additionally, we foresee the industry becoming increasingly vertically focused. While most manufacturers traditionally orient towards general-purpose security cameras, in the future, we expect a stronger focus on specific vertical needs, such as retail. Manufacturers are now asking themselves, ‘What kind of data and intelligence do retailers need beyond security?’ Their next generation of devices will likely seek to provide tailored answers for these vertical players.”