[May 21, 2025] Mobilint Unveils AWS Collaboration and New Edge AI Accelerator at Embedded Vision Summit 2025

  • Writer: K-ASIC
  • May 23, 2025
  • 2 min read

Santa Clara, CA | May 21, 2025 – Mobilint, a fast-rising AI semiconductor startup and a portfolio company of the Korea AI & System IC Innovation Center (K-ASIC), announced a strategic technology collaboration with Amazon Web Services (AWS) during its participation in Embedded Vision Summit 2025, held at the Santa Clara Convention Center.


Following initial discussions at CES 2025, Mobilint and AWS have been working closely to integrate Mobilint’s high-performance NPU (Neural Processing Unit) with AWS IoT Greengrass, Amazon’s edge runtime and cloud service for deploying and managing software on devices. The two companies are now exploring joint go-to-market strategies and customer engagement initiatives aimed at expanding the adoption of on-device AI across edge applications.
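
For a concrete sense of what such an integration involves, the sketch below shows the kind of long-running inference component that AWS IoT Greengrass typically deploys to a device. It is illustrative only: Mobilint has not published SDK details in this announcement, so the NPU call, model input, and output location are placeholders rather than actual Mobilint or AWS interfaces.

    # Illustrative Greengrass-style inference loop (Python). The run_on_npu call
    # is a placeholder for whatever runtime API Mobilint's SDK exposes; nothing
    # here is a real Mobilint interface, only the general component pattern.
    import json
    import time
    from pathlib import Path

    import numpy as np

    OUTPUT_PATH = Path("/tmp/inference-results.jsonl")  # placeholder local sink

    def run_on_npu(frame: np.ndarray) -> dict:
        """Placeholder for the vendor NPU runtime; simulates a detection result."""
        return {"label": "object", "score": round(float(frame.mean()), 3)}

    def main() -> None:
        # A Greengrass v2 component's Lifecycle.Run script would start this loop
        # and keep it running on the device; results never have to leave the edge.
        with OUTPUT_PATH.open("a") as sink:
            while True:
                frame = np.random.rand(224, 224, 3).astype(np.float32)  # stand-in camera frame
                result = run_on_npu(frame)
                sink.write(json.dumps({"ts": time.time(), **result}) + "\n")
                time.sleep(1.0)

    if __name__ == "__main__":
        main()

Packaged as a Greengrass component, a loop like this could be deployed, updated, and monitored from the cloud while inference itself stays on the local NPU, which is the pattern the announced integration targets.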

At EVS 2025, Mobilint introduced the collaboration roadmap and demonstrated real-world applications of its newly launched MLA100 MXM module—drawing strong attention from industry experts, developers, and potential partners. The new integration will enable AWS customers to leverage Mobilint’s NPU for real-time inference, local processing of sensitive data, and latency-optimized AI workloads, all at the device level.


Notably, Mobilint also plans to connect its NPU SDK with Amazon SageMaker, simplifying the AI lifecycle from training to deployment and giving developers a more seamless experience.
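
As a rough illustration of that lifecycle, the sketch below trains a model with the SageMaker Python SDK and then hands the resulting artifact to an NPU toolchain. It assumes an AWS account with a SageMaker execution role; the training script, IAM role ARN, S3 URIs, and the final mblt-compile command are all placeholders, since Mobilint’s actual SDK integration has not been detailed publicly.

    # Illustrative train-then-hand-off flow using the SageMaker Python SDK.
    # The role ARN, bucket URIs, train.py script, and mblt-compile command
    # are placeholders, not published Mobilint or customer resources.
    import sagemaker
    from sagemaker.pytorch import PyTorch

    session = sagemaker.Session()

    # 1) Train in the cloud with a standard SageMaker training job.
    estimator = PyTorch(
        entry_point="train.py",                                   # placeholder training script
        role="arn:aws:iam::123456789012:role/SageMakerRole",      # placeholder IAM role
        instance_type="ml.g5.xlarge",
        instance_count=1,
        framework_version="2.1.0",
        py_version="py310",
        sagemaker_session=session,
    )
    estimator.fit({"training": "s3://example-bucket/datasets/vision/"})  # placeholder dataset

    # 2) SageMaker leaves the trained artifact in S3; an NPU toolchain would take
    #    over from here to produce a device-ready binary (hypothetical command):
    print("Model artifact:", estimator.model_data)
    # $ mblt-compile model.tar.gz -o model.npu   # hypothetical Mobilint SDK step

In the integration Mobilint describes, that hand-off would presumably happen inside the SageMaker workflow itself rather than as a separate manual step.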

“This collaboration goes beyond a simple technical integration—it represents a strategic step toward delivering unified NPU-powered edge AI solutions to the global market,” said Dongjoo Shin, CEO of Mobilint. “We’ve seen strong interest here at EVS, and we plan to accelerate co-marketing and customer acquisition efforts alongside AWS.”

Mobilint Continues Global Momentum with EVS Appearance

Mobilint’s participation in Embedded Vision Summit 2025 builds on its recent showcases at CES (Las Vegas) and MWC (Barcelona), where it introduced on-device AI chips capable of running LLMs (Large Language Models) without server connectivity.


The centerpiece of its product line, REGULUS, is a custom NPU that offers 5–10x lower power consumption and cost than GPUs, enabling efficient deployment across drones, robotics, consumer electronics, and embedded AI systems.


In April, Mobilint launched MLA100 MXM, a compact, MXM-form-factor AI accelerator built on its flagship ERIS chip. Key features include:

  • Up to 80 TOPS at just 25W of power

  • Eight-core architecture for parallel model execution

  • Support for LLMs, VLMs, and other transformer-based models

  • Ultra-compact 82mm × 70mm size and 110g weight—ideal for robotics, industrial automation, and edge servers
