
Android’s Neural Networks API now supports hardware-accelerated inferencing with Facebook’s PyTorch Framework

Machine learning has shaped our present in ways we no longer even notice. Tasks that were previously impossible or impractical have become trivial to execute, making the technology and its benefits accessible to a much wider audience. A lot of this is made possible through on-device machine learning and Google’s Neural Networks API (NNAPI). Now, even more users will be able to experience accelerated neural networks, as the Android team has announced support for a prototype feature that lets developers use hardware-accelerated inference with Facebook’s PyTorch framework.

On-device machine learning allows models to run locally on the device without needing to transmit data to a server, allowing for lower latency, improved privacy, and the ability to work without a network connection. The Android Neural Networks API (NNAPI) is designed for running computationally intensive machine learning operations on Android devices. NNAPI provides a single set of APIs to benefit from available hardware accelerators, including GPUs, DSPs, and NPUs.

NNAPI can be accessed directly via an Android C API or via higher-level frameworks such as TensorFlow Lite. As per today’s announcement, PyTorch Mobile has released a new prototype feature supporting NNAPI, enabling developers to use hardware-accelerated inference with the PyTorch framework. This initial release includes support for well-known linear, convolutional, and multilayer perceptron models on Android 10 and above. Performance testing using the MobileNetV2 model shows up to a 10x speedup compared to single-threaded CPU execution. As the feature develops toward a stable release, future updates will add support for additional operators and model architectures, including Mask R-CNN, a popular model for object detection and instance segmentation.
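To make the workflow concrete, here is a minimal sketch of how the prototype is used: a model is traced with TorchScript on a desktop host, converted with the prototype's `torch.backends._nnapi.prepare.convert_model_to_nnapi` helper, and the result is deployed to an Android 10+ device via PyTorch Mobile. The `TinyNet` model below is a hypothetical stand-in for a real network such as MobileNetV2, and the exact conversion API may change as the feature moves toward a stable release.

```python
import torch


class TinyNet(torch.nn.Module):
    """Hypothetical stand-in for a real model such as MobileNetV2."""

    def __init__(self):
        super().__init__()
        self.conv = torch.nn.Conv2d(3, 8, kernel_size=3, padding=1)
        self.relu = torch.nn.ReLU()

    def forward(self, x):
        return self.relu(self.conv(x))


def prepare_for_nnapi(model: torch.nn.Module, example: torch.Tensor):
    """Trace a model and convert it for NNAPI execution (prototype API).

    Conversion runs on a desktop host; the saved model is then shipped
    to an Android 10+ device and loaded through PyTorch Mobile.
    """
    model.eval()
    # The prototype expects contiguous channels-last input tensors.
    example = example.contiguous(memory_format=torch.channels_last)
    traced = torch.jit.trace(model, example)
    # Prototype-only import; requires a PyTorch build that ships it.
    from torch.backends._nnapi.prepare import convert_model_to_nnapi
    return convert_model_to_nnapi(traced, example)


if __name__ == "__main__":
    model = TinyNet()
    example = torch.zeros(1, 3, 32, 32)
    # Tracing alone works on any host; only the NNAPI conversion step
    # above depends on the prototype feature.
    traced = torch.jit.trace(model.eval(), example)
    print(traced(example).shape)
```

On-device, the converted model would be saved with `nnapi_model._save_for_lite_interpreter(...)` style tooling and loaded from the Android app; the details of that deployment step are outside this sketch.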

Perhaps the most well-known software built on top of PyTorch is Tesla’s Autopilot. While today’s announcement doesn’t spell any direct news for Autopilot, it does open up the benefits of accelerated neural networks to the millions of Android users who run software built on top of PyTorch.

The post Android’s Neural Networks API now supports hardware-accelerated inferencing with Facebook’s PyTorch Framework appeared first on xda-developers.


