The course covers the construction and performance analysis of deep neural networks using the Intel® neon™ Framework.

The following topics are covered:

- Introduction to deep learning.
- Multilayered fully-connected neural networks.
- Introduction to the Intel® neon™ Framework.
- Convolutional neural networks. Deep residual networks.
- Transfer learning of deep neural networks.
- Unsupervised learning: autoencoders, deconvolutional networks.
- Recurrent neural networks.
- Introduction to the Intel® nGraph™.

The course is practice-oriented. It comprises 8 lectures (1.5 hours each) and 5 individual consultations per group of 2-3 students. Lectures are held either as plain lectures or as master classes (tutorials). In most lectures/master classes, the theoretical material is supported by examples of developing a deep neural network architecture with the Intel® neon™ Framework. The problem for which deep models are constructed is comprehensive and spans the entire lecture part, with the exception of the introductory survey lecture.

The practical part is structured as follows: students are divided into groups of 2-3 people. Each group chooses a separate problem and tries to achieve the best possible quality by constructing different types of deep architectures and modifying their internal structure. Students follow the provided tutorials, which demonstrate step-by-step deep model development with the Intel® neon™ Framework, and work in a simulated collective-development mode. The final assessment consists of presenting the developed project, including quality/performance measurements of the proposed deep neural networks.

The course is aimed at engineers, teachers, and researchers, as well as postgraduate and undergraduate students of higher educational institutions.

Participants are expected to have basic programming skills in the Python scripting language. The course also requires theoretical background in optimization methods, probability theory, image processing, and computer vision.

Syllabus is available here.

Source code for all practical classes, solving the task considered in the lectures, is available here. Experimental results are available here.

The list of practical tasks to be solved by the student groups is available here.

The licence is available here.

**Kustikova Valentina Dmitrievna**, PhD, Assistant Professor, Department of Computer Software and Supercomputer Technologies, Institute of Information Technologies, Mathematics and Mechanics, Nizhny Novgorod State University. Course lead and developer.

**Zolotykh Nikolai Yurievich**, Dr., Prof., Department of Algebra, Geometry and Discrete Mathematics, Institute of Information Technologies, Mathematics and Mechanics, Nizhny Novgorod State University. Scientific adviser.

**Zhiltsov Maxim Sergeevich**, first-year Master's student, Institute of Information Technologies, Mathematics and Mechanics, Nizhny Novgorod State University. Developer.

**The course development is supported by Intel Corporation.**

**LECTURE 1. Introduction to deep learning**

The notion of deep learning. Biological fundamentals of deep learning. Examples of practical problems. Classification of deep models.

**LECTURE 2. Multilayered fully-connected neural networks**

The structure of fully-connected neural networks (FCNN), types of activation functions. Training problem of FCNN, loss function. Backpropagation method.
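The forward pass, loss, and backpropagation update described above can be sketched in plain NumPy (an illustrative toy example, not neon code; all sizes and hyperparameters are arbitrary):

```python
import numpy as np

# Toy two-layer fully-connected network: one hidden layer with ReLU,
# softmax output, cross-entropy loss, and manual backpropagation.

rng = np.random.default_rng(0)

def softmax(z):
    z = z - z.max(axis=1, keepdims=True)   # numerical stability
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

# Random data: 8 samples, 4 features, 3 classes (one-hot labels).
X = rng.standard_normal((8, 4))
Y = np.eye(3)[rng.integers(0, 3, size=8)]

W1 = rng.standard_normal((4, 5)) * 0.1; b1 = np.zeros(5)
W2 = rng.standard_normal((5, 3)) * 0.1; b2 = np.zeros(3)

lr = 0.5
for step in range(200):
    # Forward pass.
    H = np.maximum(0.0, X @ W1 + b1)       # ReLU activation
    P = softmax(H @ W2 + b2)
    loss = -np.mean(np.sum(Y * np.log(P + 1e-12), axis=1))

    # Backward pass (chain rule, gradients averaged over the batch).
    dZ2 = (P - Y) / len(X)                 # dL/d(pre-softmax logits)
    dW2 = H.T @ dZ2; db2 = dZ2.sum(axis=0)
    dH = dZ2 @ W2.T
    dZ1 = dH * (H > 0)                     # ReLU gradient
    dW1 = X.T @ dZ1; db1 = dZ1.sum(axis=0)

    # Gradient-descent update.
    W1 -= lr * dW1; b1 -= lr * db1
    W2 -= lr * dW2; b2 -= lr * db2

print(round(loss, 4))
```

After 200 steps the loss drops well below the initial value of about ln(3) ≈ 1.099 for uniform predictions, which is a quick sanity check that the backward pass is correct.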

**PRACTICE 0. Preprocessing and converting data to HDF5 format for the Intel® neon™ Framework**

A preliminary practice to prepare the dataset for the subsequent practical classes.
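A minimal sketch of packing a small dataset into HDF5 with h5py. The dataset names (`input`/`output`) and the `lshape` attribute follow the convention I believe neon's HDF5 iterator uses, but this is an assumption; check the framework documentation for your version. Filenames and shapes are illustrative:

```python
import numpy as np
import h5py

# Illustrative sketch: pack 10 random 8x8 RGB "images" with binary
# labels into an HDF5 file for later consumption by a data iterator.
n, h, w, c = 10, 8, 8, 3
images = np.random.randint(0, 256, size=(n, h, w, c), dtype=np.uint8)
labels = np.random.randint(0, 2, size=(n, 1), dtype=np.int32)

# Flatten each image to a row vector and scale to [0, 1].
flat = images.reshape(n, -1).astype(np.float32) / 255.0

with h5py.File("dataset.h5", "w") as f:
    inp = f.create_dataset("input", data=flat)
    inp.attrs["lshape"] = (c, h, w)   # channel-first shape hint (assumed convention)
    f.create_dataset("output", data=labels)

# Read back to verify what was written.
with h5py.File("dataset.h5", "r") as f:
    inp_shape = f["input"].shape
    out_shape = f["output"].shape
print(inp_shape, out_shape)
```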

(docx)

**LECTURE 3. Introduction to the Intel® neon™ Framework**

Introduction to the Intel® neon™ Framework. Installation. The structure of an application for training/testing a single-layer fully-connected neural network using the Intel® neon™ Framework.

**PRACTICE 1. The development of fully-connected neural networks using the Intel® neon™ Framework**

Problem statement for the laboratory works. Development of fully-connected network architectures with different numbers of hidden layers and hidden elements per layer. Developing scripts for training/testing the proposed architectures. Carrying out experiments and collecting performance results.

(docx)

**LECTURE 4. Convolutional neural networks. Deep residual networks**

The structure of a convolutional layer and a convolutional network. An example of training/testing a single-layer convolutional network using the Intel® neon™ Framework. Deep residual networks: a typical structural block and an example of a residual network.
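The two building blocks of this lecture can be illustrated in a framework-agnostic NumPy sketch: a single-channel "valid" convolution and a residual connection y = F(x) + x (toy sizes, no padding or stride):

```python
import numpy as np

def conv2d_valid(x, k):
    """Single-channel 'valid' convolution (strictly speaking,
    cross-correlation, as in most deep-learning libraries)."""
    H, W = x.shape
    kh, kw = k.shape
    out = np.zeros((H - kh + 1, W - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            # Slide the filter over the input and take a dot product.
            out[i, j] = np.sum(x[i:i + kh, j:j + kw] * k)
    return out

x = np.arange(16.0).reshape(4, 4)
k = np.array([[1.0, 0.0], [0.0, -1.0]])   # diagonal-difference filter
y = conv2d_valid(x, k)
print(y.shape)   # a 4x4 input and 2x2 filter give a 3x3 output

# Residual connection: the block learns only a correction F(x),
# which is added back to the input.
def residual(x, f):
    return f(x) + x
```

Note how the output spatial size shrinks to (H - kh + 1, W - kw + 1) without padding, which is why deeper convolutional networks typically pad their inputs.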

**PRACTICE 2. The development of convolutional neural networks using the Intel® neon™ Framework**

Development of convolutional network architectures with different number of hidden layers and filter parameters on each layer. Developing scripts for training/testing the proposed architectures. Carrying out experiments, collecting performance results.

(docx)

**LECTURE 5. Transfer learning of deep neural networks**

Description of the general approach underlying the transfer learning in deep neural networks. An example of transfer learning application using the Intel® neon™ Framework.

**PRACTICE 3. Application of transfer learning to solve a given problem using the Intel® neon™ Framework**

Selecting a source problem (related to the given problem) and a corresponding pre-trained model. Modifying the network architecture for the given problem. Three training regimes are compared: training the parameters of all network layers from arbitrary (random) initialization; training all layers initialized with the weights obtained by solving the source problem; and training only the last (modified) layers of the network, with the remaining layers initialized from the source-problem model.
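The difference between these regimes reduces to which layers receive gradient updates. A minimal sketch in plain NumPy (not neon API; each "layer" is just a weight matrix and the gradients are dummies):

```python
import numpy as np

rng = np.random.default_rng(1)

def train_step(weights, grads, trainable, lr=0.1):
    """Apply an SGD update only to layers marked trainable;
    frozen layers keep their pre-trained values."""
    return [w - lr * g if t else w
            for w, g, t in zip(weights, grads, trainable)]

# Pre-trained "model": three layers with dummy gradients.
pretrained = [rng.standard_normal((4, 4)) for _ in range(3)]
grads = [np.ones((4, 4)) for _ in range(3)]

# Regime 2: pre-trained initialization, all layers trainable.
all_tuned = train_step(pretrained, grads, trainable=[True, True, True])

# Regime 3: pre-trained initialization, only the last layer trainable.
last_only = train_step(pretrained, grads, trainable=[False, False, True])

print(np.allclose(last_only[0], pretrained[0]))  # frozen layer unchanged
```

Regime 1 (training from scratch) is the same update with freshly re-initialized weights instead of `pretrained`.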

(docx)

**LECTURE 6. Unsupervised learning: autoencoders, deconvolutional networks**

Unsupervised learning methods. The concept of an autoencoder, a stack of autoencoders, deconvolutional networks.
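The autoencoder idea can be shown with a tiny linear autoencoder in NumPy (tied weights, squared reconstruction error; sizes and hyperparameters are arbitrary). Note that training uses no labels, only the inputs themselves:

```python
import numpy as np

# Tiny single-layer linear autoencoder with tied weights: encode
# 6-D inputs to a 2-D code, decode with the transposed encoder, and
# minimize mean squared reconstruction error by gradient descent.
rng = np.random.default_rng(0)
X = rng.standard_normal((32, 6))
W = rng.standard_normal((6, 2)) * 0.1     # encoder; decoder is W.T

lr = 0.01
for _ in range(500):
    Z = X @ W                   # code (linear encoder)
    Xh = Z @ W.T                # reconstruction (tied decoder)
    R = Xh - X                  # reconstruction residual
    # Gradient of the squared error w.r.t. W for tied weights.
    grad = (X.T @ (R @ W) + R.T @ (X @ W)) / len(X)
    W -= lr * grad

mse = np.mean((X @ W @ W.T - X) ** 2)
print(round(mse, 4))
```

With a 2-D code the network cannot reconstruct 6-D inputs perfectly; training drives the reconstruction error down toward the variance left outside the best 2-D subspace (a linear autoencoder recovers the PCA subspace).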

**PRACTICE 4. Pre-training the weights of the most promising fully-connected architectures for the subsequent supervised solution of a given problem using the Intel® neon™ Framework**

Selection of several fully-connected neural networks. Developing a stack of autoencoders. Training the developed architectures. Applying the obtained initial weights to train the network in a supervised manner to solve the given problem.

(docx)

**LECTURE 7. Recurrent neural networks**

The general structure of the model. Unrolling a recurrent network in time. Training recurrent networks. Long short-term memory (LSTM) networks. An example of training/testing a simple recurrent network using the Intel® neon™ Framework.
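Unrolling in time means applying the same weights at every step while the hidden state carries information forward. A forward-pass-only sketch of a simple (Elman-style) recurrent network in NumPy, with arbitrary sizes:

```python
import numpy as np

rng = np.random.default_rng(0)

T, d_in, d_h = 5, 3, 4          # sequence length, input dim, hidden dim
Wx = rng.standard_normal((d_in, d_h)) * 0.5   # input-to-hidden weights
Wh = rng.standard_normal((d_h, d_h)) * 0.5    # hidden-to-hidden weights
b = np.zeros(d_h)

xs = rng.standard_normal((T, d_in))           # input sequence
h = np.zeros(d_h)                             # initial hidden state
states = []
for t in range(T):                            # "unrolled" loop over time
    # The same Wx, Wh, b are reused at every step.
    h = np.tanh(xs[t] @ Wx + h @ Wh + b)
    states.append(h)

states = np.array(states)
print(states.shape)                           # one hidden state per step
```

Training (backpropagation through time) differentiates through this same loop, summing the gradients of the shared weights over all time steps.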

**PRACTICE 5. The development of recurrent neural networks using the Intel® neon™ Framework**

Development of recurrent network architectures with different numbers of hidden layers and hidden elements per layer. Developing scripts for training/testing the proposed architectures. Carrying out experiments and collecting performance results.

(docx)

**LECTURE 8. Efficient execution of neural networks. The Intel® nGraph™ overview**

Introduction to the Intel® nGraph™. The neon™ frontend to Intel® nGraph™.