Start Date: 7/25/2023
Start Time: 9:00 AM PDT
Duration: 60 minutes
Quantization is the process of mapping continuous values to a smaller, finite set of discrete values. In deep learning, it is a powerful technique that can significantly reduce the memory footprint and computational requirements of models, making them more efficient and easier to deploy on resource-constrained devices.
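As a minimal sketch of the idea, the snippet below shows generic 8-bit affine quantization of a float tensor with NumPy: float32 values are mapped to int8 via a scale and zero-point, then dequantized back to approximate floats. This is an illustrative example only, not NNCF's or OpenVINO's actual implementation.

```python
import numpy as np

# Illustrative 8-bit affine (uniform) quantization of a float tensor.
# Generic sketch for exposition; not NNCF's actual implementation.

def quantize_int8(x: np.ndarray):
    """Map float values to int8 using a scale and zero-point."""
    qmin, qmax = -128, 127
    scale = (x.max() - x.min()) / (qmax - qmin)
    zero_point = round(qmin - x.min() / scale)
    q = np.clip(np.round(x / scale) + zero_point, qmin, qmax).astype(np.int8)
    return q, scale, zero_point

def dequantize(q: np.ndarray, scale: float, zero_point: int) -> np.ndarray:
    """Recover an approximation of the original float values."""
    return (q.astype(np.float32) - zero_point) * scale

weights = np.array([-1.5, -0.3, 0.0, 0.7, 2.1], dtype=np.float32)
q, scale, zp = quantize_int8(weights)
recovered = dequantize(q, scale, zp)
# int8 storage uses 4x less memory than float32, at the cost of a
# small rounding error bounded by roughly one quantization step.
```

Storing int8 instead of float32 cuts memory by 4x, which is the kind of saving that makes deployment on resource-constrained devices practical.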
In this talk, we will explore the different types of quantization techniques that can be applied to deep learning models. In addition, we will give an overview of the Neural Network Compression Framework (NNCF) and how it complements the OpenVINO™ Toolkit to achieve outstanding performance.
Not registered for the Beyond the Continuum: The Importance of Quantization in Deep Learning webcast and interested in signing up? Click below:
Register Now!

Adrian Boguszewski
Intel AI Software Evangelist
Intel Corporation
Zhuo Wu
Intel AI Software Evangelist
Intel Corporation