
Accelerate AI with Intel® Architecture Powering Microsoft Azure

MaryT_Intel
Employee

As AI continues its march into the mainstream enterprise, IT organizations are looking for ways to simplify implementation. Intel is helping customers accelerate their path to cloud-based AI by working with cloud providers to offer optimized and intelligent services based on trusted Intel® architecture.

This week, the Microsoft Azure + AI Conference (co-located with DEVintersection) runs December 3-6 in Las Vegas. Intel is a proud Diamond sponsor of the conference and is collaborating with Microsoft to deliver peak performance on a range of Intel architecture-based Azure deployments, from pre-built cognitive APIs to hardware-accelerated inferencing and flexible virtual machines.

Cognitive Computing

Cognitive computing is a technology platform that can be used to quickly integrate human-like interaction and capabilities into business processes. Intel works hard to ensure cognitive computing solutions running on Intel® technologies are performant both in the cloud and at the edge.

Microsoft offers Azure Cognitive Services to provide a simple, powerful way to deploy intelligent algorithms for natural language processing, speech recognition, object detection, and more via a simple API. The newly announced Azure Cognitive Services containers enable next-generation connected, distributed applications to operate on data wherever it resides. Intel is working closely with Microsoft to tune Azure Cognitive Services in the cloud and enhance performance as containers across a variety of Intel-based edge solutions. Over time, additional containers will expand the scope of this approach, making them even more flexible and capable in solving all kinds of business problems.
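To make the "simple API" point concrete, here is a minimal sketch of how a client might call a Cognitive Services text-sentiment endpoint hosted in a local container. The URL route and version below are assumptions for illustration; the actual route depends on which container image and API version you deploy.

```python
import json
import urllib.request

# Hypothetical endpoint for a locally running Cognitive Services container;
# check the container's documentation for the real route and version.
CONTAINER_URL = "http://localhost:5000/text/analytics/v2.1/sentiment"

def build_sentiment_request(texts, url=CONTAINER_URL):
    """Build a batch sentiment-analysis HTTP request.

    Cognitive Services text APIs take a JSON body of "documents", each
    with an id, a language hint, and the text itself.
    """
    body = {
        "documents": [
            {"id": str(i), "language": "en", "text": text}
            for i, text in enumerate(texts, start=1)
        ]
    }
    return urllib.request.Request(
        url,
        data=json.dumps(body).encode("utf-8"),
        headers={"Content-Type": "application/json"},
        method="POST",
    )

# Build (but do not send) a request for two documents.
req = build_sentiment_request([
    "Intel architecture powers Azure.",
    "Containers bring the same API to the edge.",
])
```

Because the cloud service and the container expose the same request shape, code like this can move between cloud and edge deployments without change, which is the portability argument made above.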

Edge Computing

Microsoft and Intel are working with Carnegie Mellon on research to help connect edge devices to the cloud. The two companies recently announced a donation of hardware and services to the university’s Living Edge laboratory, which is developing devices capable of constant learning at the network’s intelligent edge.

Inferencing Services

For many organizations, accelerating inferencing with custom silicon is an attractive prospect, but the complexity of doing so can be daunting. Azure Machine Learning with Project Brainwave cuts through that complexity, offering acceleration on Intel FPGAs without any specialized expertise or on-premises hardware. This Platform as a Service (PaaS) offering, powered by Intel FPGAs, delivers order-of-magnitude performance gains for inferencing images.

Virtual Machines

Some organizations need complete flexibility for their AI implementations, especially when they have proprietary models or complex requirements. For customers selecting Infrastructure as a Service (IaaS) deployments for their machine learning and AI solutions, Microsoft offers Azure Fv2 and Azure HC VMs powered by high-performance Intel® Xeon® Platinum 8168 processors for computationally intensive workloads on the public cloud.

Microsoft and Intel also continue to empower developers with their toolchains of choice. For example, Microsoft Visual Studio supports Intel AVX-512 instructions to accelerate AI applications. Intel has also been working to enable Intel-optimized TensorFlow* on Fv2 instances, along with optimizations for other popular deep learning frameworks such as Caffe*, MXNet*, and Theano*. The goal is to put more power in the hands of developers thanks to performance speedups and multi-node scaling on the Intel Xeon Scalable processors that underlie these purpose-built Azure VM instances.
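As a starting point for the Xeon optimizations mentioned above, here is a minimal sketch of the OpenMP/MKL environment tuning commonly recommended for Intel-optimized TensorFlow. The specific values (core count, block time, affinity string) are assumptions to adjust per workload, and they must be set before TensorFlow is imported.

```python
import os

def set_mkl_env(physical_cores):
    """Set common Intel MKL-DNN tuning knobs for TensorFlow on Xeon.

    Call this before `import tensorflow`, since the OpenMP runtime
    reads these variables at load time. Values are starting points,
    not prescriptions.
    """
    settings = {
        # One OpenMP thread per physical core (not per hyperthread).
        "OMP_NUM_THREADS": str(physical_cores),
        # Milliseconds a thread spins waiting for work before sleeping.
        "KMP_BLOCKTIME": "1",
        # Pin threads compactly to cores to improve cache locality.
        "KMP_AFFINITY": "granularity=fine,compact,1,0",
    }
    os.environ.update(settings)
    return settings

# Example: an Azure VM instance with 8 physical cores per socket.
applied = set_mkl_env(8)
```

On the framework side, the same intent is expressed through TensorFlow's intra-op and inter-op parallelism settings; the right combination depends on the model and the VM size, so benchmarking on the target instance is advisable.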

ONNX Collaboration

Intel and Microsoft worked together to optimize the Open Neural Network Exchange* (ONNX*), an open-source format for sharing deep learning models. This includes support for the Intel-developed nGraph open-source library and compiler suite, which lets data scientists use their preferred deep learning framework for training and inference across multiple hardware architectures. Microsoft recently announced the ONNX Runtime, an inference engine built to rapidly adapt to new software and hardware, including nGraph and the Intel® Math Kernel Library for Deep Neural Networks (Intel® MKL-DNN). Learn more at the Azure + AI Conference sessions.
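The key mechanism here is that ONNX Runtime dispatches each part of a model graph to the first "execution provider" in a priority list that can handle it, so new backends like nGraph or MKL-DNN can be slotted in without changing application code. The stdlib-only sketch below mimics that selection logic; the provider name strings are assumptions modeled on the backends mentioned above, not the runtime's actual registry.

```python
# Preferred-first ordering: hardware-optimized backends before the
# generic CPU fallback. Names are illustrative assumptions.
PREFERENCE = [
    "NGRAPHExecutionProvider",
    "MKLDNNExecutionProvider",
    "CPUExecutionProvider",
]

def choose_provider(available, preference=PREFERENCE):
    """Return the most-preferred execution provider that is available."""
    for provider in preference:
        if provider in available:
            return provider
    raise RuntimeError("no usable execution provider")

# Example: a machine with MKL-DNN support but no nGraph backend.
picked = choose_provider({"MKLDNNExecutionProvider", "CPUExecutionProvider"})
```

In the real library, a provider preference like this is supplied when creating an inference session (the exact API varies by ONNX Runtime version), and the runtime falls back to the plain CPU provider for any operator the optimized backends cannot execute.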

Take the Next Step

Take advantage of the groundbreaking co-innovation by Microsoft and Intel to power the next generation of AI on the Azure platform.

Get started at ai.intel.com

About the Author
Mary is the Community Manager for this site. She likes to bike and, in her spare time, does college and career coaching for high school students.