May 16, 2019

Xnor releases AI2GO: A self-serve edge AI platform for building smart on-device solutions

SEATTLE, May 16, 2019 (GLOBE NEWSWIRE) -- Xnor.ai has launched AI2GO, a self-serve platform that enables developers, device creators and companies to build smart, edge-based solutions without training or background in AI. AI2GO is available now and contains more than a hundred fully trained models optimized to run on resource-constrained devices such as mobile devices, wearables, smart cameras, remote sensors and more. AI2GO models are being used today to build solutions for retail analytics, smart home and industrial IoT.

Before AI2GO, AI relied almost exclusively on expensive hardware running in the cloud and was restricted to a handful of companies. Even with the tools available, building AI products required deep learning expertise to design, train and implement solutions. Deploying these models at the edge meant solving for a whole host of constraints, including memory, power and latency, which made developing on-device AI nearly impossible.

AI2GO promises to change the scale and speed at which AI solutions can be built. As the first platform to offer hundreds of fully trained edge AI models with state-of-the-art accuracy, AI2GO frees developers from worrying about data collection, annotation, training, model architecture or performance optimization. They simply download the complete solution and are ready to go.

“Xnor’s ‘drag & drop’ approach to AI application design removes much of the pain for developers living at the intersection of hardware and software,” says Adam Benzion, Co-founder and CEO of Hackster.io.

“Xnor AI2GO is the gold standard for optimizing IoT ML,” says Gant Laborde, CIO of Infinite Red.

Enterprise customers using Xnor will continue to benefit from its custom-trained, highest-performance models. In the coming months, the AI2GO platform will give enterprise customers access to fully optimized models along with additional custom features, including automated training and re-training, and performance optimization for large-scale development teams.

The release of AI2GO is a continuation of Xnor’s mission to bring AI Everywhere to Everyone. In 2017, Xnor demonstrated it could remove the cost barrier by running deep learning on $5 hardware. In 2019, it removed the barrier of power with solar powered AI. Now, with AI2GO, Xnor is removing the barrier of AI expertise.

“By providing access to deep learning that can readily run on-device, we believe we afford all companies, regardless of team, budget or hardware, the opportunity to participate in this new era of AI innovation. AI2GO enables this vision through a platform of a large number of models running on many devices that are able to operate under numerous constraints,” says Ali Farhadi, Co-founder and CXO, Xnor.

Using AI2GO is simple. First the user selects their preferred hardware (Raspberry Pi, Linux, Ambarella, Toradex, etc.), then chooses an AI use case, for example a “pet classifier for a home security camera,” a “person detector for a dash cam,” or a “person segmenter for video conferencing applications.” Because AI2GO models are designed to run in resource-constrained environments, Xnor gives the user the novel opportunity to tune their model for latency (milliseconds) and memory footprint (megabytes) to fit within their constraints. Once the user has specified their constraints, the available models are listed, ranked by accuracy. The user can then download an Xnor Bundle (XB), a module containing a deep learning model and an inference engine. Xnor also provides an accompanying SDK that includes code samples, demo applications, benchmarking tools and technical documentation, making it simple for anyone to start building a smart application; a minimal usage sketch follows below.
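To illustrate the workflow after download, the sketch below shows how a bundled detector might be invoked from Python. It assumes a hypothetical binding named `xnornet` exposing `Model.load_built_in`, `Input.jpeg_image` and `evaluate`; the actual module, function names and return types should be taken from the SDK's documentation and bundled code samples.

```python
# Minimal sketch: running a downloaded Xnor Bundle on a single image.
# "xnornet" and its calls are assumptions for illustration; consult the
# SDK documentation and code samples for the real API.
import sys

import xnornet  # hypothetical Python binding installed from the bundle


def main(image_path):
    # Load the model packaged in the installed bundle (e.g. a person
    # detector tuned for the latency/memory budget chosen on AI2GO).
    model = xnornet.Model.load_built_in()

    # Wrap a JPEG file as model input and run on-device inference.
    with open(image_path, "rb") as f:
        model_input = xnornet.Input.jpeg_image(f.read())
    results = model.evaluate(model_input)

    # Each result is assumed to carry a class label (and, for detectors,
    # a bounding box); print whatever the bundle returns.
    for item in results:
        print(item)


if __name__ == "__main__":
    main(sys.argv[1])
```

Because the bundle already contains both the trained model and the inference engine, the calling code stays the same regardless of which use case or latency/memory trade-off was selected on the platform.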

Xnor will host events to onboard new users to AI2GO in the coming months, including at the Embedded Vision Summit in California, May 20-23, in booth #419.

Media contact:
Kevin Wolf
TGPR
(650) 483-1552
kevin@tgprllc.com

Photos accompanying this announcement are available at:
http://www.globenewswire.com/NewsRoom/AttachmentNg/9121a6b5-39c2-4731-8077-5defa93c1f46
http://www.globenewswire.com/NewsRoom/AttachmentNg/2dcadc5a-7809-4db1-b21d-66b05ac1b2aa

A video accompanying this announcement is available at:
http://www.globenewswire.com/NewsRoom/AttachmentNg/3d3bf852-6369-42da-9fdd-4581cc0d7593