
AI Developer Webinar Series

Sharpen your AI skills with webinars on the latest frameworks, optimization tools, and products, presented by AI experts from Intel. Browse the most popular topics from the series below and select one or more to watch.

Webinars Now On-Demand

Did you miss a live webinar? Not to worry. All of the webinars have been recorded and are available to watch at your convenience. Check the box(es) next to the webinar(s) you would like to view, enter your information to sign up if you have not registered already, and you will be emailed a link to view them.

Select your webinar(s):

nGraph: Unlocking Next-Generation Performance with Deep Learning Compilers


The rapid growth of deep learning in large-scale, real-world applications has driven a surge in demand for high-performance training and inference solutions. This demand is reflected in increased investment in deep learning performance by hardware manufacturers, including a proliferation of new application-specific accelerators.

But performance isn’t driven by hardware alone. In the software realm, a new class of deep learning compilers has emerged, which brings to bear both classic and novel compiler techniques in order to maximize the performance of deep learning systems. Recently developed deep learning compilers include NNVM/TVM from the University of Washington, Glow from Facebook, XLA from Google, and nGraph from Intel. These deep learning compilers unlock a wealth of optimizations that encompass the whole data-flow graph. This approach achieves substantial speedups over the approach favored by existing frameworks, in which an interpreter orchestrates the invocation of per-op compute kernels that must be optimized specifically for the framework and hardware target. This webinar will offer a comprehensive overview of Intel’s nGraph deep learning compiler.
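To make the contrast concrete, here is a minimal sketch of the bridge-based workflow, assuming the open-source ngraph-bridge add-on for TensorFlow* from the nGraph project (the package name and the import-only activation are assumptions based on that project’s published usage):

    import numpy as np
    import tensorflow as tf
    import ngraph_bridge  # assumed import name from the nGraph bridge project

    # Build an ordinary TensorFlow graph; with the bridge loaded, supported
    # subgraphs are handed to nGraph and compiled as a whole, rather than
    # dispatched op-by-op to per-op kernels by the framework interpreter.
    a = tf.placeholder(tf.float32, shape=(None, 1024), name="a")
    w = tf.Variable(tf.ones((1024, 1024)))
    y = tf.nn.relu(tf.matmul(a, w))

    with tf.Session() as sess:
        sess.run(tf.global_variables_initializer())
        out = sess.run(y, feed_dict={a: np.ones((8, 1024), np.float32)})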

Adam Procter

Adam Procter is a deep learning software engineer in the Artificial Intelligence Products Group at Intel, where he works on the core design of the Intel nGraph deep learning compiler. He holds a PhD in computer science from the University of Missouri, where his research focused on programming language semantics, high-assurance computing, and techniques for compiling functional programming languages to reconfigurable hardware.

Introduction to the Intel® Distribution of OpenVINO™ Toolkit and WinML*


Powered by oneAPI

In this webinar, you will learn how real-time inference on the PC for visual workloads such as object detection, recognition, and tracking can now be developed easily with the Intel® Distribution of OpenVINO™ toolkit and the Windows Machine Learning (WinML) API.

Rudy Cazabon

Rudy Cazabon holds a Bachelor’s degree in Space Science (with a minor in Mechanical Engineering) from the Florida Institute of Technology, with graduate studies in Aerospace and Astronautics from Georgia Tech and Management Science from Stanford. He has run a technical consultancy in 3D graphics, VR, and computer vision; is an active volunteer in STEM K-12 programs; and participates in academic venues such as ACM SIGGRAPH.

Deep Dive and Use Cases with the Intel® Distribution of OpenVINO™ Toolkit


Powered by oneAPI

A previous webinar introduced the inference engine to the developer community as the component for building real-time applications. This webinar takes a deep dive into the capabilities of the inference engine and the API that enables the creation and deployment of such applications. A selection of use cases will be reviewed in the context of the models and topologies used and the hardware targets they employ.
You will learn:

  • A review of the OpenVINO toolkit
  • A deep dive into the inference engine API via a code walkthrough (a minimal sketch follows this list)
  • A survey of use cases and applications developed for various hardware targets
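
As a taste of that walkthrough, here is a minimal sketch of the inference engine’s Python API (based on the 2019-era openvino.inference_engine module; the model paths and device string are placeholders):

    import numpy as np
    from openvino.inference_engine import IECore, IENetwork

    # Load a model already converted to Intermediate Representation (IR);
    # the file names here are placeholders.
    ie = IECore()
    net = IENetwork(model="model.xml", weights="model.bin")
    input_blob = next(iter(net.inputs))

    # Compile the network for a target device plugin: "CPU", "GPU", "MYRIAD", ...
    exec_net = ie.load_network(network=net, device_name="CPU")

    # Run a synchronous inference request on dummy input data.
    data = np.zeros(net.inputs[input_blob].shape, dtype=np.float32)
    result = exec_net.infer(inputs={input_blob: data})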

Rudy Cazabon, AI Developer Evangelist at Intel

Rudy holds a Bachelor’s degree in Space Science (with a minor in Mechanical Engineering) from the Florida Institute of Technology, with graduate studies in Aerospace and Astronautics from Georgia Tech and Management Science from Stanford. He has run a technical consultancy in 3D graphics, VR, and computer vision; is an active volunteer in STEM K-12 programs; and participates in academic venues such as ACM SIGGRAPH.

The PlaidML Tensor Compiler


A naive implementation of a convolution can be written in 18 lines of Python, while kernel libraries typically devote tens of thousands of lines of code to optimized variants of convolution, often in architecture-specific ways, or even in assembly in the case of cuDNN*. Where specialized code like this is needed to meet performance goals, massive engineering resources are required, maintenance and development costs are high, and lock-in is common. Compilers were originally created to automate the construction of general-purpose machine code. Today, machine learning increasingly relies on compilation, reducing engineering constraints and enabling automated, special-case performance gains for workloads too rare or novel to receive human optimization. PlaidML is a tensor compiler that can be used as a component in existing ML stacks to boost performance and to enable performance portability.
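
For a sense of how lightweight that integration can be, here is a minimal sketch of PlaidML as a drop-in Keras* backend, based on the published PlaidML 0.x usage (version details are assumptions):

    # Swap PlaidML in as the Keras backend (PlaidML 0.x-era usage).
    import plaidml.keras
    plaidml.keras.install_backend()  # must run before Keras itself is imported

    # From here on, Keras builds and executes its ops through PlaidML.
    import keras
    from keras.models import Sequential
    from keras.layers import Dense

    model = Sequential([Dense(10, activation="softmax", input_shape=(784,))])
    model.compile(optimizer="adam", loss="categorical_crossentropy")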

You Will Learn:

  • How to use PlaidML in an existing TensorFlow* program through a demonstration
  • About the PlaidML internal architecture and its role in the broader ML ecosystem
  • The technical details we will discuss further in our upcoming webinar, “How PlaidML Compiles Machine Learning Kernels”

Tim Zerrell

Tim Zerrell is a Deep Learning Software Engineer in Intel’s Artificial Intelligence Products Group. He works on PlaidML, focusing on representing the mathematics of machine learning in a form that enables effective hardware and software optimizations. He received a Master’s degree in Mathematics from the University of Washington and a Bachelor’s degree in Mathematics from Pomona College. In his free time he enjoys hiking in his local Pacific Northwest wilderness.

Denise Kutnick

Denise Kutnick is a Deep Learning Software Engineer within Intel’s Artificial Intelligence Products Group. In her role, Denise works on the development and community engagement of PlaidML, an extensible, open-source deep learning tensor compiler. Denise holds a bachelor’s degree in Computer Science from Florida Atlantic University and a master’s degree in Computer Science from Georgia Institute of Technology.

How PlaidML Compiles Machine Learning Kernels


As discussed in the introductory webinar “The PlaidML Tensor Compiler,” you can replace kernel libraries with PlaidML’s extensible, high-performance compiler. PlaidML’s philosophy is that optimal kernels can be produced automatically from hardware descriptions if the constraints inherent to ML problems are appropriately represented. PlaidML utilizes a Nested Polyhedral Model to represent operations at a granularity suited to the loop-restructuring optimizations that machine learning workloads need. This webinar will show you how PlaidML transforms high-level semantic descriptions of ML operations into optimized ML kernels.
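
To make this concrete, consider the kind of high-level description a tensor compiler starts from. A matrix multiply is a single contraction over an iteration polyhedron (a generic illustration, not PlaidML’s exact notation):

    C_{ij} = \sum_{k} A_{ik} B_{kj},
    \qquad (i, j, k) \in \{\, 0 \le i < M,\ 0 \le j < N,\ 0 \le k < K \,\}

Everything a kernel library hand-codes, such as loop order, tiling, and vectorization, is left unspecified here; the compiler derives it from this description together with a model of the target hardware.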

You Will Learn:

  • How PlaidML compiles ML kernels
  • How PlaidML enables the development and deployment of novel operations and optimization techniques

Tim Zerrell

Tim Zerrell is a Deep Learning Software Engineer in Intel’s Artificial Intelligence Products Group. He works on PlaidML, focusing on representing the mathematics of machine learning in a form that enables effective hardware and software optimizations. He received a Master’s degree in Mathematics from the University of Washington and a Bachelor’s degree in Mathematics from Pomona College. In his free time he enjoys hiking in his local Pacific Northwest wilderness.

Denise Kutnick

Denise Kutnick is a Deep Learning Software Engineer within Intel’s Artificial Intelligence Products Group. In her role, Denise works on the development and community engagement of PlaidML, an extensible, open-source deep learning tensor compiler. Denise holds a bachelor’s degree in Computer Science from Florida Atlantic University and a master’s degree in Computer Science from Georgia Institute of Technology.

What are FPGAs and How Do I Use Them?


This introductory course is a high-level overview of what a field programmable gate array (FPGA) is, why FPGAs are important as inference accelerators, and how easily they can be adopted into compute clusters.

Learn how heterogeneous parallel computing is used to solve complex problems. Discover how FPGAs are used for efficient compute offload to overcome the limitations of scaling systems. Walk through the different programming models that exist for FPGAs. See how the Acceleration Stack for Intel® Xeon® CPU with FPGAs can be deployed transparently into data centers and cloud systems to take advantage of FPGA-based acceleration.

You Will Learn:

  • About FPGAs and what programming models exist for FPGAs
  • How FPGAs are used for efficient compute offload
  • How the Acceleration Stack for Intel® Xeon® CPU with FPGAs is used in data center and cloud systems

Bill Jenkins

Bill Jenkins serves as a principal application engineer in the Programmable Solutions Group at Intel Corporation. He focuses on the acceleration of a variety of workloads in data center, cloud, and edge applications using field programmable gate arrays (FPGAs). He has also been one of the key driving forces behind the adoption of higher-level programming models that enable software developers and scientists to target FPGAs.

Using FPGAs for Data Center Acceleration


Powered by oneAPI

Learn how to deploy deep learning inference tasks on FPGAs using the Intel® Distribution of OpenVINO™ toolkit and more.

Field-programmable gate arrays (FPGAs) are user-customizable integrated circuits that can be deployed in data centers to accelerate algorithms. In this webinar, explore Intel solutions for accelerating various workloads. Discover how deep learning inference tasks can be deployed on FPGAs using the Intel® Distribution of OpenVINO™ toolkit and the Intel® FPGA Deep Learning Acceleration Suite. See how to use the Acceleration Stack for Intel® Xeon® CPU with FPGAs to develop and deploy workload optimizations on Intel® Programmable Acceleration Cards. Examine ways to develop custom Accelerator Functional Units for the FPGA.

You Will Learn:

  • How to leverage the solutions provided by Intel to accelerate various workloads
  • How deep learning inference tasks can be deployed on FPGAs using the Intel Distribution of OpenVINO toolkit and the Intel FPGA Deep Learning Acceleration Suite
  • Ways to develop custom Accelerator Functional Units for the FPGA

Steven Elzinga

Steven Elzinga is an Application Engineer in the Customer Training group of the Intel Programmable Solutions Group, focusing on deep learning acceleration techniques for the FPGA. His FPGA experience also includes embedded systems and real-time video processing as an IP and system developer. Steven holds a bachelor’s degree in electrical engineering from the University of Utah and a master’s degree in electrical engineering from the University of Colorado.

Accelerating Deep Learning Workloads in the Cloud and Data Centers


Powered by oneAPI

Get started using Intel-optimized AI environments for your workloads in the cloud or in your data center.

Collaboration between Intel, major cloud service providers, and hardware OEM partners has resulted in pre-configured, Intel-optimized deep learning environments. This webinar will get you started using these environments for your workloads, be it in the cloud or in your data center.

You Will Learn:

  • Intel’s AI software and hardware strategy
  • How to launch pre-configured virtual machines with Intel-optimized deep learning frameworks in the cloud (AWS, Azure, and GCP)
  • How to run TensorFlow* CNN benchmarks with pre-configured Intel environments (a tuning sketch follows this list)
  • Technical details of the Intel-optimized solutions that accelerate deployment to the data center via OEM partners (Dell, Lenovo, HPE, Inspur)
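
For a flavor of the tuning these environments pre-configure, here is a minimal sketch of the standard CPU threading knobs for TensorFlow* (the values shown are illustrative, not benchmarked recommendations):

    import os
    # OpenMP/MKL settings commonly recommended for Intel CPUs (illustrative).
    os.environ["OMP_NUM_THREADS"] = "16"  # match the physical core count
    os.environ["KMP_AFFINITY"] = "granularity=fine,compact,1,0"
    os.environ["KMP_BLOCKTIME"] = "1"

    import tensorflow as tf  # TF 1.x-era API
    config = tf.ConfigProto(
        intra_op_parallelism_threads=16,  # threads used within a single op
        inter_op_parallelism_threads=2)   # independent ops run concurrently
    sess = tf.Session(config=config)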

Ravi Panchumarthy

Ravi Panchumarthy, PhD, is a Machine Learning Engineer in Intel’s Artificial Intelligence Products Group. He collaborates with Intel’s customers and partners to build and optimize AI solutions, and works with cloud service providers adopting Intel’s AI optimizations for cloud instances and services. Ravi has a PhD in Computer Science & Engineering from the University of South Florida; his dissertation focused on developing novel non-Boolean computing techniques for computer vision applications using nanomagnetic field-based computing. He holds two patents and has published several peer-reviewed papers in journals and conference proceedings.

Introduction to Reinforcement Learning Coach


Reinforcement Learning Coach (RL Coach) is a comprehensive framework that enables reinforcement learning (RL) agent development, training, and evaluation.

Join us for a webinar introducing Reinforcement Learning Coach, a comprehensive framework that enables RL agent development, training, and evaluation. Learn the basics of reinforcement learning, what exactly RL Coach is, and how you can get started using it.
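
For a preview, here is a minimal sketch based on RL Coach’s published quickstart (module paths reflect the rl_coach package of this era and should be treated as assumptions):

    # Train a Clipped PPO agent on the CartPole Gym environment.
    from rl_coach.agents.clipped_ppo_agent import ClippedPPOAgentParameters
    from rl_coach.environments.gym_environment import GymVectorEnvironment
    from rl_coach.graph_managers.basic_rl_graph_manager import BasicRLGraphManager
    from rl_coach.graph_managers.graph_manager import SimpleSchedule

    graph_manager = BasicRLGraphManager(
        agent_params=ClippedPPOAgentParameters(),
        env_params=GymVectorEnvironment(level="CartPole-v0"),
        schedule_params=SimpleSchedule())
    graph_manager.improve()  # runs the training/evaluation schedule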

Michael Zephyr

Michael Zephyr is an AI Developer Evangelist within the Intel Architecture, Graphics and Software Group at Intel. He works on promoting various Intel technologies that pertain to machine learning and artificial intelligence and regularly speaks at universities and conferences to help spread knowledge of AI. Michael holds a bachelor’s degree in Computer Science from Oregon State University and a master’s degree in Computer Science from the Georgia Institute of Technology. He can often be found playing board games or video games and lounging with his wife and cat in his free time.

Non-Visual AI Inference on the Edge


AI inference on the PC allows trained models to be deployed to Intel® Xeon® Scalable processors, Processor Graphics, and the Intel® Myriad™ VPU. In previous webinars, we demonstrated how the Intel® Distribution of OpenVINO™ toolkit can ingest image recognition and classification models from various frameworks (e.g., TensorFlow*, PyTorch*) and deploy them on these hardware platforms in a performant manner. In this webinar, we will focus on new capabilities of the OpenVINO toolkit for deploying speech recognition models built with the Kaldi framework to the CPU, Processor Graphics, and the VPU. You will walk away knowing how to convert a speech recognition model to Intermediate Representation (IR) using the Model Optimizer and run inference on multiple hardware targets using the Inference Engine plugins.

Learning Objectives:

  • Understand the requirements for creating IR for speech recognition models: a Kaldi-trained model (e.g., a .nnet1 or .nnet2 model), a Kaldi class counts file, and so on
  • See how to use the Model Optimizer to create the IR files (.xml and .bin); a minimal sketch follows this list
  • Use the Inference Engine to deploy to the CPU, GPU, and VPU
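
The following minimal sketch ties the objectives together (file names are placeholders; the Model Optimizer flags are based on its documented Kaldi frontend):

    # Step 1 (shell): convert the Kaldi-trained model to IR with the
    # Model Optimizer. This produces model.xml and model.bin:
    #   python mo.py --framework kaldi --input_model model.nnet1 --counts model.counts

    # Step 2: load the IR and choose a device plugin with the Inference Engine.
    from openvino.inference_engine import IECore, IENetwork

    ie = IECore()
    net = IENetwork(model="model.xml", weights="model.bin")
    exec_net = ie.load_network(network=net, device_name="CPU")  # or "GPU", "MYRIAD"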

Rudy Cazabon

Rudy holds a Bachelor’s degree in Space Science with a minor in Mechanical Engineering from the Florida Institute of Technology, with graduate studies in Aerospace and Astronautics from Georgia Tech and Management Science from Stanford. He has run a technical consultancy in 3D graphics, VR, and computer vision; is an active volunteer in STEM K-12 programs; and participates in academic venues such as ACM SIGGRAPH.


Webinar Series Moderator

Meghana Rao

Artificial Intelligence Developer Evangelist at Intel

Meghana Rao is an Artificial Intelligence Developer Evangelist at Intel. In her role, she works closely with universities and developers to evangelize Intel’s AI portfolio and solutions, helping them understand machine learning and deep learning concepts and build models and proofs of concept (POCs) using Intel-optimized frameworks and libraries such as Caffe*, TensorFlow*, and the Intel® Distribution for Python*. She has a Bachelor’s degree in Computer Science and Engineering and a Master’s degree in Engineering and Technology Management, with past experience in embedded software development, Windows* app development, and UX design methodologies.

Enter your info to sign up

(*) All fields are required


By submitting this form, you are confirming you are age 18 years or older and you agree to share your personal data with Intel for this business request.

By submitting this form, you are confirming you are age 18 years or older and agree that Intel may contact you for marketing-related communications. To learn about Intel’s practices, including how to manage your preferences and settings, visit Intel’s Privacy and Cookies notices.

You will receive an email confirmation to attend your selected webinar(s).