Hello, and welcome to this training presentation on the Intel® Distribution of OpenVINO™ toolkit. This training video is Part 1 of a two-part series where you'll learn all about the Intel Distribution of OpenVINO toolkit. In this session, we'll give an overview of the Intel Distribution of OpenVINO toolkit and its key benefits. In the second session, we'll take a deeper technical dive into the toolkit's features and components. Let's get started.

The era of machines that see and understand is upon us, with deep learning software revenue estimated to grow from $3 billion in 2017 to $67.2 billion by 2025. Whether it's industrial equipment with cameras for quality assurance and inspection, logistics and asset management, or positioning, guidance, and measurement; a smart camera at the edge in a security and surveillance scenario performing intrusion detection, crowd monitoring, or person and object tracking; or autonomous devices and vehicles doing obstacle, pedestrian, and vehicle detection, collision avoidance, and scene analytics, these modern machines are enabled by advanced computer vision and deep learning capabilities to visually understand and act on their environments. There are a multitude of valuable use cases and opportunities for video and vision systems, from smart parking and advanced driver assistance systems in transportation, to loss prevention in retail, to machine vision and robotics in factories.

The surging use of new high-definition and ultra-high-definition video cameras for security and surveillance is creating unprecedented volumes of video data. This surge is creating demand for technologies designed to effectively manage and store the data as well as integrate with other systems and databases. What's required is distributed, end-to-end intelligence that addresses gaps and bottlenecks with practical solutions. Additionally, these solutions must provide meaningful insights while offloading compute and image processing from overburdened systems.

Intel is accelerating computer vision and deep learning solutions from camera to cloud and streamlining analytics processing and deployment across multiple use cases. With the company's CPU, CPU with integrated graphics, VPU, and versatile FPGA architectures, plus its investment in a range of discrete accelerators, Intel offers the technical ingredients to deliver high performance and power and cost efficiencies to designs for cameras, gateways, network video recorders (NVRs), and servers. Additionally, to meet the evolving needs of software developers, data scientists, OEMs, ISVs, and system integrators, Intel offers the Intel Distribution of OpenVINO toolkit. With the toolkit, developers can create high-performance computer vision applications and easily incorporate industry-standard frameworks and trained models to deploy deep learning solutions that run fast and seamlessly across Intel's silicon architectures.

Intel believes that deep learning is often more robust than conventional computer vision methods. Deep learning can extract meaningful information from available data. For example, when processing images or videos, deep learning can detect objects and tag them, even in ultra-high-definition images with millions of pixels. In fact, deep learning is now able to meet or exceed human-level capabilities, especially in recognition tasks where high accuracy is important.
Developers will need to assess, for each particular use case, whether the increased computation and robustness of deep learning is required over traditional computer vision methods. Regardless of your needs, Intel's powerful vision portfolio offers both traditional computer vision and deep learning capabilities for every use case.

In order to realize the vision of driving the next generation of AI and deep learning, Intel recognizes the need to support developers at each phase of the deep learning development cycle. Developers often need to start by acquiring and importing data, which they use to build deep learning models. They then train those models and deploy them in applications, where optimizing for performance is critical. Unfortunately, today's data scientists are forced to spend unacceptable amounts of time preprocessing data, iterating on models and parameters, waiting for training to converge due to compute constraints, and experimenting with different deployment models. These inefficiencies prolong development and slow down innovation. With the Intel Distribution of OpenVINO toolkit, developers and data scientists can accelerate every stage of the deep learning development cycle and easily deploy deep learning from the edge to the cloud.

The Intel Distribution of OpenVINO toolkit is designed to increase performance and reduce development time of computer vision solutions. It simplifies access to these benefits with a rich set of hardware options available from Intel, enabling solutions with specific design constraints, reducing power, and maximizing hardware utilization. This lets you do more with less, opening entirely new design possibilities.

The Intel Distribution of OpenVINO toolkit includes the Intel® Deep Learning Deployment Toolkit, a cross-platform tool that accelerates deep learning inference performance. It includes the Model Optimizer, which converts trained models from the Caffe*, TensorFlow*, MXNet*, and Kaldi* frameworks, as well as the ONNX* format, into intermediate representation (IR) files, and the Inference Engine, with plugins for Intel® CPUs, Intel CPUs with integrated graphics, Intel® GNAs, Intel® Arria® 10 FPGAs, and Intel® Movidius™ Myriad™ Vision Processing Units (VPUs), which optimizes inference with multiplatform support. The Intel Distribution of OpenVINO toolkit supports traditional computer vision libraries, including OpenCV and OpenVX*, and ships with a wide range of code samples. Also included are tools and libraries that increase CPU and Intel® processor graphics performance and enable Intel® FPGA optimization, with complete support for Intel® architecture-based platforms. Finally, the Intel Distribution of OpenVINO toolkit offers support for the following operating systems: CentOS*, Ubuntu*, Microsoft Windows® 10, and Yocto Project* Linux (Jethro, version 2.0.3).

The Intel Distribution of OpenVINO toolkit can be downloaded for free from software.intel.com/openvino-toolkit. For users building on Intel CPUs or Intel® integrated graphics, the open source version, the OpenVINO™ toolkit, is available for download at 01.org/openvinotoolkit. For more information on the key differences between the Intel Distribution of OpenVINO toolkit and the OpenVINO toolkit, please read the FAQ at 01.org/openvinotoolkit/FAQ.
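To make that two-step workflow concrete, here is a minimal sketch of converting a trained model and then running it through the Inference Engine. It assumes a hypothetical TensorFlow frozen graph named frozen_model.pb and uses the pre-2022 Inference Engine Python API; exact module and method names vary across toolkit releases, so treat this as an illustration rather than release-specific reference code.

```python
# A minimal sketch of the workflow described above, under these assumptions:
# a TensorFlow frozen graph named frozen_model.pb (placeholder path) and the
# pre-2022 Inference Engine Python API (names vary by toolkit release).
#
# Step 1 - Model Optimizer: convert the trained model into IR (.xml + .bin):
#   python3 mo.py --input_model frozen_model.pb --output_dir ir/
#
# Step 2 - Inference Engine: load the IR files and run inference on a device.
import numpy as np
from openvino.inference_engine import IECore

ie = IECore()
net = ie.read_network(model="ir/frozen_model.xml", weights="ir/frozen_model.bin")
exec_net = ie.load_network(network=net, device_name="CPU")  # or "GPU", "MYRIAD", ...

input_name = next(iter(net.input_info))                  # first network input
n, c, h, w = net.input_info[input_name].input_data.shape
dummy_batch = np.zeros((n, c, h, w), dtype=np.float32)   # stand-in for real image data

results = exec_net.infer(inputs={input_name: dummy_batch})
for output_name, output_array in results.items():
    print(output_name, output_array.shape)
```

Because the IR files carry the optimized network independently of the original framework, the same application code can load models that started out in any of the supported frameworks.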
The Intel Distribution of OpenVINO toolkit maximizes the power of Intel CPUs, Intel processor graphics, FPGAs, and VPUs, helping developers accelerate performance with computer vision accelerators from Intel, enhancing code performance, and enabling heterogeneous processing and asynchronous execution across multiple types of Intel processors. Integrate deep learning by using convolutional neural network (CNN) based deep learning inference through a common API, streamlining inference and deployment with standard or custom layers and without the overhead of full frameworks. This includes more than 30 pretrained models, ready-to-use computer vision algorithms (CVAs), and support for more than 100 open source and custom models. Speed development using optimized OpenCV and OpenVX functions, and get started quickly with more than 15 code samples. Finally, innovate and customize by adding custom OpenCL™ kernels into workloads such as video and image processing, computer vision routines, and feature extraction and tracking.

Included within the Deep Learning Deployment Toolkit, the Model Optimizer is a Python*-based tool that converts trained models from standard frameworks into unified IR files. It is important to note that you must have a trained model in order to use the Model Optimizer. You can feed OpenVINO's pretrained models, or one of the more than 100 open source and public models that OpenVINO supports, into the Model Optimizer. Once the trained model is converted and optimized into the intermediate representation format, it is loaded into the Inference Engine. The Inference Engine is a high-level API that allows for testing across different accelerators without having to recode. It also allows heterogeneity by providing fallback from custom layers on an FPGA to the CPU or Intel processor graphics, as sketched in the example below.

The Intel Distribution of OpenVINO toolkit includes optimized pretrained models that can expedite development and improve deep learning inference on Intel processors. Use these models for development and production deployment without the need to search for or train your own models. If you do not have a trained model and the 30-plus pretrained models in the toolkit don't fit your solution's needs, check out the Intel® AI Builders program or the Open Model Zoo for model development and training. Know that Intel is committed to keeping the Intel Distribution of OpenVINO toolkit updated to support your development needs. To see the latest supported frameworks and models, visit software.intel.com/openvinotoolkit. Along with pretrained models, the Intel Distribution of OpenVINO toolkit includes deep learning samples and computer vision algorithms to help users save development time. For the deep learning samples, the Model Optimizer and Inference Engine can be used with both public models and pretrained models from Intel.

Vision solutions optimized by the Intel Distribution of OpenVINO toolkit have already proven great value across industries. In a smart cities case study, the Intel Distribution of OpenVINO toolkit enabled a 10 times performance improvement after just three weeks of development. The Intel Distribution of OpenVINO toolkit has also proven its value in public safety, as demonstrated in a stadium surveillance case study that used over 9,000 cameras to protect two million people, with an 8.3 times increase in performance. A health care case study saw a 14-fold performance increase for classifying age-related macular degeneration images.
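To illustrate the earlier point about testing across accelerators without recoding, here is a minimal sketch, again assuming the pre-2022 Inference Engine Python API and a placeholder IR model at ir/frozen_model.xml. The HETERO device string lets layers an accelerator cannot run fall back to the CPU, and pretrained models shipped with the toolkit can be fetched with the Open Model Zoo downloader (for example, downloader.py --name <model-name>).

```python
# A minimal sketch of switching target devices without changing application
# code, assuming the pre-2022 Inference Engine Python API and a placeholder
# IR model at ir/frozen_model.xml / ir/frozen_model.bin.
from openvino.inference_engine import IECore

ie = IECore()
print("Devices visible to the Inference Engine:", ie.available_devices)

net = ie.read_network(model="ir/frozen_model.xml", weights="ir/frozen_model.bin")

# The same network targets different plugins; only the device name changes.
# "HETERO:FPGA,CPU" asks the heterogeneous plugin to run layers on the FPGA
# where possible and fall back to the CPU for the rest.
for device in ("CPU", "GPU", "HETERO:FPGA,CPU"):
    try:
        exec_net = ie.load_network(network=net, device_name=device)
        print(f"Loaded network on {device}")
    except RuntimeError as err:
        print(f"{device} not available on this machine: {err}")
```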
A company in the transportation vertical developed an on-train vision platform that enabled pedestrian and vehicle identification across roads and empty seat detection on trains. A smart retail deployment used AI vision to retrieve correlated analytical data on top-performing traffic, shopper movement, revenue, and conversion rates. A company in the safety vertical saw a 2.3 times speed increase when performing face identification for secure facilities. For more great case studies, visit software.intel.com/articles/SDP-case-studies.

Whatever your needs, Intel offers resources to help partners on their development journey. First, learn more about the Intel Distribution of OpenVINO toolkit's community forum, developer resources, and hands-on developer workshops by visiting software.intel.com/openvinotoolkit. Next, read more about the OpenVINO toolkit, the Open Model Zoo, and the Deep Learning Deployment Toolkit on the OpenVINO toolkit homepage at 01.org/openvinotoolkit. Get familiar with the rest of the Intel vision product portfolio by visiting intel.com/visionproducts.

You've now completed Part 1 of the Intel Distribution of OpenVINO toolkit training. Next, we'll take a deeper dive into the Intel Distribution of OpenVINO toolkit in Part 2 of the series. Thank you for watching.