In this set of videos, we'll begin to introduce how we can leverage deep learning techniques for our time series modeling. Let's go over the learning goals for this section. We're going to cover why deep learning is useful for time series forecasting, and how it can automatically learn much of what we've discussed so far with autocorrelation and seasonality. We'll discuss the pros and cons of the deep learning approach, and why it isn't one size fits all, so we shouldn't always just turn to deep learning. We'll then discuss recurrent neural networks and how we can leverage them for time series modeling. Then we'll discuss the way that LSTMs, or long short-term memory units, can improve on those simple RNNs, especially when we want longer-term memory. Finally, we'll close out by showing how to use Python and Keras to apply deep learning-based time series models.

Why move to deep learning? Neural networks offer several benefits over traditional time series forecasting models. First of all, deep learning techniques can automatically learn to incorporate series characteristics like trend, seasonality, and autocorrelation into their predictions. They can do so without us having to specify something like the anticipated orders p and q to capture the autocorrelation structure, or the seasonal frequency, which we have to pass into our ARIMA models. Instead, deep learning techniques can automatically figure out how to pick up these complex structures. These complex patterns can include many types of autocorrelation, not just a simple lag two or lag three but something a bit more intricate, where lags interact with one another, as well as multiple seasonalities. We're not limited to just one level of seasonality; perhaps there are weekly patterns, monthly patterns, and yearly patterns within our data that we can pick up when using deep learning techniques. Then finally, deep learning techniques can simultaneously model many related series instead of treating each one separately. If there are similar patterns across related time series, a single model can learn these similarities across all the series in our dataset.

With that, it's also important to note why we may not want to use deep learning models. These neural network benefits don't come for free. First of all, the models tend to be complicated to build, as well as computationally expensive; we've seen in our prior lessons how long it may take to train a deep learning model, although GPUs can help with this if you have them available. Also, deep learning models often overfit: although they can come up with these complex models, that complexity often comes at the price of overfitting to the data. It's also very challenging to explain or interpret predictions made by the model; it's somewhat of a black box. If we think about our smoothing and our seasonal ARIMA models, we know how much weight we're giving to certain trends and seasonality, as well as the autocorrelation structure that's built in, all of which is much more difficult to explain when we're leveraging deep learning models. Finally, in order to find these complex patterns without overfitting, deep learning models tend to perform best with much larger training datasets.

We're going to start here with what an RNN is, or a recurrent neural network.
The main deep learning technique that we will leverage when working with time series will be within this recurrent neural network family. Much of what we'll discuss will actually be a bit of review from the deep learning course, if you went over those videos, just now applied to time series data. Recurrent neural networks map a sequence of inputs to a predicted output. The most common format for an RNN model will be many-to-one, which maps an input sequence of many values to one output value. Thinking in terms of time series, this means taking some sequence and predicting one value into the future.

Within our RNN structure, the input at each time step sequentially updates the recurrent neural network cell's hidden state, or its memory, so that, as discussed, it can learn these built-in patterns automatically. Then, after processing the input sequence, the hidden state information is used to predict that single output.

The setup will be that we pass in a sequence of set length, here of length five, going from input 0 to input 4. That will be a single input for our training set. We can slide this window across the history of our time series to learn the relationships, so that we end up with a training set with multiple sequences of length five (and those chunks can be larger as well). Those inputs are all passed through our recurrent neural network so that it can learn the patterns and seasonality built into our sequences. Then finally, for each one of those training sequences, there will be a single output for which it is trying to optimize the prediction: the very next value in the sequence. Our features are the time steps t_0 through t_4, and our outcome variable y is the value at t_5.

So that gives us the basics of how recurrent neural networks work with time series; a minimal code sketch of this windowing setup is shown below. In the next video, we'll continue by diving a bit further under the hood. I'll see you there.
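As a supplement, here is a minimal sketch in Keras of the many-to-one setup just described, assuming a univariate series stored in a NumPy array. The synthetic sine-wave data, layer sizes, and epoch count are illustrative choices, not something specified in the lesson.

```python
import numpy as np
from tensorflow import keras
from tensorflow.keras import layers

# Hypothetical univariate series; in practice this would be your own data.
series = np.sin(np.arange(200) * 0.1) + np.random.normal(0, 0.1, 200)

# Build the training set described above: each input is a window of five
# consecutive values (t_0 through t_4), and the target is the very next
# value in the sequence (t_5).
window = 5
X = np.array([series[i:i + window] for i in range(len(series) - window)])
y = series[window:]

# RNN layers expect 3D input: (samples, time steps, features per step).
X = X.reshape(-1, window, 1)

# A many-to-one recurrent model: the SimpleRNN cell updates its hidden
# state at each time step, and the final hidden state feeds a Dense
# layer that predicts the single next value.
model = keras.Sequential([
    keras.Input(shape=(window, 1)),
    layers.SimpleRNN(16),
    layers.Dense(1),
])
model.compile(optimizer="adam", loss="mse")
model.fit(X, y, epochs=10, verbose=0)

# Forecast one step ahead from the last observed window.
next_value = model.predict(series[-window:].reshape(1, window, 1))
```

When we get to LSTMs later in this section, the same structure applies; the `SimpleRNN` layer would simply be swapped for an LSTM layer.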