For the course Deep Learning for Business, we are now going to go into the Deep Learning Project with TensorFlow Playground. First, we'll start with an introduction to TensorFlow Playground. TensorFlow Playground is a web application written in JavaScript using the d3.js library. It is one of the best applications for learning about neural networks without math: in your web browser, you can create a neural network and immediately see your results. It is licensed under the Apache License 2.0. To its creators, the people who did the previous work, and the contributing members and teams: my personal thanks for making this great web application.

Now let's go to the website playground.tensorflow.org, which will lead you to this page. On the top part is the menu, which includes the Epoch, Learning rate, Activation, Regularization rate, and Problem type. Every time training is conducted over the whole training set, the Epoch number increases, as you can see over there. The learning rate determines the learning speed, so we need to select a proper learning rate, and you can do that using the menu over here. The activation function type needs to be selected as well. More information on the activation functions is in the Module 4 lecture. As you can see, these are the options that are available.

To give you a brief review, the artificial neural network neuron was first modeled like this. The Activation Function with Hard Output is what you see right here. The system works like this: over there, you can see the inputs and the weights. They are multiplied and then summed together to produce a value y. The value y is compared to the threshold T, and if it is larger than T, then the output is 1; otherwise, it will be 0. This activation function makes a decision of either 1 or 0, so we call it the Hard Output Activation Function.
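As a quick illustration of this hard-output neuron, here is a minimal Python sketch (my own illustrative code, not the Playground's, which is written in JavaScript): the inputs and weights are multiplied and summed, and the sum is compared to the threshold T.

```python
def hard_threshold_neuron(inputs, weights, threshold):
    """Multiply inputs by weights, sum them, and apply a hard 1/0 decision."""
    y = sum(x * w for x, w in zip(inputs, weights))
    return 1 if y > threshold else 0

# y = 1.0*0.8 + 0.5*0.4 = 1.0, which is larger than T = 0.9, so the output is 1
print(hard_threshold_neuron([1.0, 0.5], [0.8, 0.4], 0.9))  # 1
```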
There are also Soft Output Activation Functions, which include the Rectified Linear Unit (ReLU), the Hyperbolic Tangent, the Sigmoid, and the Linear function. Among these, ReLU is actually the most popular. As you can see from the characteristics here, if y is less than zero, then the output is just zero; however, if y is above zero, then the actual y value becomes the output. That is what the Rectified Linear Unit does: the word "rectified" means that everything below zero is cut off, chopped off. Basically, that's how it's used.

Now we look into regularization, which is used to prevent overfitting. TensorFlow Playground provides two types of regularization: L1 and L2. Let's look at this. Regularization slowly increases or reduces the weights of the strong and weak connections to make the pattern classification sharper. L1 and L2 are the most popular regularization methods. In addition, Dropout is also a regularization method; it is taught in Module 5 of this course. Looking into L1 and L2: L1 is effective in sparse feature spaces, where there is a need to select a few features among many. L1 makes selections, assigns those features large weight values, and drives the weights of the non-selected features to become very small or zero. L2 is effective with inputs that are correlated, as it controls the weight values corresponding to the level of correlation. A higher regularization rate will make the weights more limited in range. These are the values that we will need to select from.

The next step is to select the problem type. We get to choose between Classification and Regression from this menu right here. Then we will need to select the type of data set. We will do that using the control over there, which will result in this type over here. We get to see what type of problem we're going to solve based upon the data set that we specify right here. Overall, there are four types of Classification data sets and two types of Regression data sets.
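The soft activation functions and the two regularization penalties described above can be sketched in a few lines of Python (an illustrative sketch under my own function names, not the Playground's implementation):

```python
import math

def relu(y):      # output is 0 below zero, and y itself above zero
    return max(0.0, y)

def sigmoid(y):   # squashes y into the range (0, 1)
    return 1.0 / (1.0 + math.exp(-y))

def tanh(y):      # squashes y into the range (-1, 1)
    return math.tanh(y)

def linear(y):    # passes y through unchanged
    return y

# L1 and L2 penalties added to the loss, scaled by the regularization rate.
# L1 pushes non-selected weights toward exactly zero; L2 shrinks all
# weights smoothly, keeping them limited in range.
def l1_penalty(weights, rate):
    return rate * sum(abs(w) for w in weights)

def l2_penalty(weights, rate):
    return rate * sum(w * w for w in weights)
```

A higher `rate` makes either penalty dominate the loss, which is why raising the regularization rate in the Playground keeps the weights smaller.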
The blue and orange dots form the data sets. An orange dot has a value of -1, and a blue dot has a value of +1. Using the ratio of training to test data, the percentage of the training set can be controlled using that control module over there. For example, if it's 50%, you can see the dots over here; if you change it to 10%, you can see that there are far fewer dots over there. The noise level of the data can also be set and controlled, using the control module over there. The data pattern becomes more irregular as the noise increases. As you can see over here, when the noise is zero, the data is very clearly separated into regions. However, by raising it to 50, you can see that the blue dots and the orange dots get all mixed up, making classification very challenging. The batch size determines the amount of data used for each training iteration, and you can control it using that over there.

Now we need to do the feature selection. We will use x1 and x2, which are over there, among the many features that exist. x1 is the value on the horizontal axis, and x2 is the value on the vertical axis. To give you an example, the dot over here has approximately an x1 value of 3.1 and an x2 value of 4, as you can see right here.

The hidden layer structure is listed over here. We can have up to six hidden layers, and you can add a hidden layer by clicking on that plus sign. Also, you can have up to eight neurons per hidden layer; you can add a neuron to a hidden layer by clicking on the plus sign over there. Pressing the arrow button starts the neural network training. The Epoch will increase by one, and backpropagation will be used to train the neural network. In addition, if you need to reset the overall training, you can do that by clicking on the refresh button. The neural network will try to minimize the Test Loss and the Training Loss.
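The train/test ratio, batch size, and epoch controls fit together roughly as in this Python sketch (a hypothetical helper of my own, not Playground code): the data is shuffled, split by the training ratio, and the training set is cut into mini-batches, where one pass over all the batches is one epoch.

```python
import random

def split_and_batch(data, train_ratio, batch_size, seed=0):
    """Shuffle the data, split it by train_ratio, and cut the training
    set into mini-batches. Each batch drives one backpropagation update,
    and one pass over all the batches is one epoch."""
    rng = random.Random(seed)
    shuffled = data[:]
    rng.shuffle(shuffled)
    n_train = int(len(shuffled) * train_ratio)
    train, test = shuffled[:n_train], shuffled[n_train:]
    batches = [train[i:i + batch_size] for i in range(0, len(train), batch_size)]
    return batches, test

# 100 points at a 50% training ratio: 5 batches of 10 training points,
# and 50 points held out for testing.
batches, test = split_and_batch(list(range(100)), 0.5, 10)
```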
Changes in the Test Loss and Training Loss will be shown in a small performance curve located right here. The Test Loss will have a black performance curve, and the Training Loss will have a grey performance curve, which we will soon see. If the loss is reduced, the curves, of course, will go down. We will see that soon, in this position right here. These are the references that I used, and I recommend them to you.
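To give a sense of what these loss numbers measure, here is a tiny Python sketch of a mean squared error loss (an illustrative choice of loss; the Playground's exact loss formula may differ): it is the average squared gap between the network's predictions and the true dot values of +1 or -1, so better predictions mean a lower number and a falling curve.

```python
def mse_loss(predictions, targets):
    """Mean squared error between predicted values and true dot values."""
    return sum((p - t) ** 2 for p, t in zip(predictions, targets)) / len(targets)

print(mse_loss([1.0, -1.0], [1.0, -1.0]))  # 0.0 (perfect predictions)
print(mse_loss([0.0, 0.0], [1.0, -1.0]))   # 1.0 (worse predictions)
```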