
Forecasting Chinese GDP with the Markov Switching Model


Gross Domestic Product (GDP) is a measure of the total output produced in an economy; it measures the size of an economy in terms of economic activity. The GDP of China was worth 11199.15 billion US dollars in 2016, representing 18.06 percent of the world economy. GDP in China averaged 1790.50 USD billion from 1960 until 2016, reaching an all-time high of 11199.15 USD billion in 2016 and a record low of 47.21 USD billion in 1962. One of the most common tasks in data analysis is forecasting GDP, in our case forecasting Chinese GDP (CGDP).

Forecasting is the process of making predictions of the future based on past and present data and most commonly by analysis of trends. A commonplace example might be estimation of some variable of interest at some specified future date.

How Does Forecasting Work?

There is a lot of variation on a practical level when it comes to business cycle forecasting. However, on a conceptual level, all forecasts follow the same process.

1. A problem or data point is chosen. This can be something like “what will our Chinese GDP be in March next year?”
2. Theoretical variables and an ideal data set are chosen. This is where the forecaster identifies the relevant variables that need to be considered and decides how to collect the data.
3. Assumption time. To cut down the time and data needed to make a forecast, the forecaster makes some explicit assumptions to simplify the process.
4. A model is chosen. The forecaster picks the model that fits the data set, selected variables and assumptions.
5. Analysis. Using the model, the data is analyzed and a forecast made from the analysis.
6. Verification. The forecaster compares the forecast to what actually happens to tweak the process, identify problems or in the rare case of an absolutely accurate forecast, pat himself on the back.

Background, Objectives and Motivation of Research

Research Issues:

We can divide our research issue into two parts: first, the general problems with forecasting, and second, the technical problems with forecasting. In both cases we consider the limits of forecasting, but in the first part we speak generally about the external impacts at the macroeconomic and microeconomic levels, while in the second part we discuss the forecasting limits of previous research. At the end of this section, we present the justification and motivation of this study.

General Problems with Forecasting

Forecasting data is very useful for the future growth of the GDP of any country, especially China, as it allows planners to organize production, financing and so on. However, there are three problems with relying on forecasts:
1. The data is always going to be old. Historical data is all we have to go on and there is no guarantee that the conditions in the past will persist into the future.
2. It is impossible to factor in unique or unexpected events, or externalities. Assumptions are dangerous, such as the assumption that banks were properly screening borrowers prior to the subprime meltdown, and black swan events have become more common as our dependence on forecasts has grown.
3. Forecasts can’t integrate their own impact. By having forecasts, accurate or inaccurate, the actions of businesses are influenced by a factor that can’t be included as a variable. This is a conceptual knot. In a worst case scenario, management becomes a slave to historical data and trends rather than worrying about what the business is doing now.

Objective and Motivation of our study

The first objective of this study is to bring Machine Learning (ML) forecasting methods into our current field of study, Econometrics, specifically into forecasting CGDP data. Why? Because econometricians want to be able to explain observed phenomena, and many, though not all, machine learning techniques (neural networks, SVMs, ensembles) have a very difficult time quantifying the impact of one variable on the observed phenomena; econometricians instead study and judge the model carefully. That, I think, is why econometricians have not adopted machine learning in their research (and are being left behind). There is also a philosophical difference between machine learning and econometrics.

Econometricians are taught to begin with a theory and build a model to validate or invalidate that theory: they start with the theory. Machine learners start with the data.

Literature Review

Morteza Salehi Sarbijan (2014) compared the Markov Switching Time Series Model with the ARIMA model for forecasting Iranian economic growth, concluding that the Markov switching model offered a better prediction of the economic growth of Iran than the ARIMA model. Melike Bildirici and Özgür Ersin (2014) studied a family of regime switching neural network augmented volatility models, evaluating forecast accuracy with an application to daily returns of an emerging-market stock index. Hal R. Varian, in his article "Big Data: New Tricks for Econometrics" (Spring 2014), observed that since computers are now involved in many economic transactions, big data will only get bigger, and that data manipulation tools and techniques developed for small datasets will become increasingly inadequate for new problems. An important insight from machine learning is that averaging over many small models tends to give better out-of-sample prediction than choosing a single model (Hal R. Varian 2014).

In 2006, Netflix offered a million-dollar prize to researchers who could provide the largest improvement to their existing movie recommendation system. The winning submission involved a "complex blending of no fewer than 800 models," though they also point out that "predictions of good quality can usually be obtained by combining a small number of judiciously chosen methods" (Feuerverger, He, and Khatri 2012). It also turned out that a blend of the best and second-best submissions outperformed either of them.

All of those studies are about forecasting data: different methods, but the same objective of getting more accurate results. The methods can come from Econometrics, which uses statistical methods, or from Machine Learning (ML), which uses computational methods. So what has previous research said about this? Frankly, this is not a new debate. The difference between computational statistics and statistical computing is just one more analogy to the question above.

Prior to the current Big Data explosion, statistics and computer science operated in well-defined silos at both universities and organizations. Now there is a convergence between the two, statistics and computer science, to explain why the customer is acting in a particular way and to forecast what they will want next. Enter the twin paradigms of econometric modeling and machine learning. At first glance they have similarities as well as differences; some techniques, like regression modeling, are taught in both courses.

Yet they are different by definition. Econometric models are statistical models used in econometrics: an econometric model specifies the statistical relationship that is believed to hold between the various economic quantities pertaining to a particular economic phenomenon under study. Machine learning, on the other hand, is a scientific discipline that explores the construction and study of algorithms that can learn from data. So that makes a clear distinction, right?

If it learns on its own from data, it is machine learning. If it is used for an economic phenomenon, it is an econometric model. However, the confusion arises in the way these two paradigms are championed. The computer science major will always say machine learning, and the statistics major will always emphasize modeling. Since computer science majors now rule at Facebook, Google and almost every technology company, you would think that machine learning is dominating the field and beating poor old econometric modeling.


There are a large number of methods used for forecasting, ranging from judgmental methods (expert forecasting, etc.) through expert systems and time series to causal methods (regression analysis, etc.). Most are used to give a single point forecast, or at most single point forecasts for a limited number of scenarios. We will compare two approaches in this study: one purely econometric, the Markov Switching Model, and one completely opposite, the new ML approach called the LSTM algorithm. This comparison will focus especially on the results for CGDP growth and not really on how the models work because, as we said before, econometricians start with the model and judge it, while machine learners start with the data.

1. Markov Switching Approach:

As an alternative to using simple average growth rates as a measure of economic performance, we use the Hamilton (1989) Markov-switching model, which describes the economy as switching between business cycle phases (high and low growth), each with its own average growth rate. Formally, let the growth rate of our measure of economic activity, yt, be described as follows:

yt = μSt + εt,  (1)

where the growth rate of economic activity has mean μSt, and deviations from this mean growth rate are created by the stochastic disturbance εt. To capture the two phases, the mean growth rate in (1) is permitted to switch between two regimes, where the switching is governed by a latent state variable, St ∈ {0, 1}.

When St switches from 0 to 1, the growth rate of economic activity switches from μ0 to μ0 + μ1. Since μ1 < 0, St switches from 0 to 1 at times when economic activity switches from high-growth to low-growth states, and vice versa. Because St is unobserved, estimation of (1) requires restrictions on the probability process governing St; in this case, we assume that St is a first-order two-state Markov chain, so any persistence in the state is completely summarized by the value of the state in the last period. Under this assumption, the probability process driving St is captured by the transition probabilities p = P(St = 0 | St−1 = 0) and q = P(St = 1 | St−1 = 1).
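As an illustration, a two-state Markov-switching growth series of this form can be simulated in a few lines of Python. All parameter values below (μ0, μ1, σ, and the transition probabilities p and q) are hypothetical, chosen for the sketch rather than estimated from the Chinese GDP data:

```python
import numpy as np

rng = np.random.default_rng(0)

mu0, mu1 = 3.0, -1.5   # state-dependent means: mu0 (high growth), mu0 + mu1 (low growth)
sigma = 0.5            # std. dev. of the disturbance eps_t
p = 0.9                # P(S_t = 0 | S_{t-1} = 0)
q = 0.8                # P(S_t = 1 | S_{t-1} = 1)
T = 200

# Simulate the latent first-order two-state Markov chain S_t
states = np.empty(T, dtype=int)
states[0] = 0
for t in range(1, T):
    stay = p if states[t - 1] == 0 else q
    states[t] = states[t - 1] if rng.random() < stay else 1 - states[t - 1]

# y_t = mu_{S_t} + eps_t, with mu_{S_t} = mu0 + mu1 * S_t
y = mu0 + mu1 * states + sigma * rng.standard_normal(T)
```

In practice the parameters are not chosen but estimated from the data, e.g. with a maximum-likelihood routine such as the Markov-switching regression implementation in statsmodels.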

2. The Time Series Forecasting with the Long Short Term Memory (LSTM) in Python:

2.1 Experimental Test Setup

We will split the Chinese GDP dataset into two parts: a training set and a test set. The earlier portion of the data will be taken for the training dataset and the most recent portion will be used for the test set. Models will be developed using the training dataset and will make predictions on the test dataset. A rolling forecast scenario will be used, also called walk-forward model validation. Each time step of the test dataset will be walked one at a time: a model will be used to make a forecast for the time step, then the actual expected value from the test set will be taken and made available to the model for the forecast on the next time step.
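The walk-forward procedure can be sketched as follows, using a naive persistence model (predict the previous observation) as a stand-in for the real model, and a short hypothetical series in place of the CGDP data:

```python
import math

# Hypothetical series standing in for the CGDP data
series = [100.0, 104.0, 109.0, 113.0, 118.0, 124.0, 131.0, 137.0, 144.0]
split = int(len(series) * 2 / 3)          # earlier portion for training
train, test = series[:split], series[split:]

history = list(train)
predictions = []
for actual in test:
    yhat = history[-1]                    # persistence forecast for this step
    predictions.append(yhat)
    history.append(actual)                # reveal the true value for the next step

# RMSE summarizes forecast skill in the same units as the data
rmse = math.sqrt(sum((a - f) ** 2 for a, f in zip(test, predictions)) / len(test))
```

In the study itself, the persistence forecast on the line marked above is replaced by the model's one-step prediction; everything else stays the same.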

Specifically, we will present the following points:

How to prepare time series data for developing an LSTM model.

How to develop an LSTM model for time series forecasting.

How to evaluate an LSTM model using a robust test harness.

Finally, all forecasts on the test dataset will be collected and an error score calculated to summarize the skill of the model. The root mean squared error (RMSE) will be used, as it punishes large errors and results in a score that is in the same units as the forecast data. So how will we prepare our data?

2.2 LSTM Data Preparation

Before we can fit an LSTM model to the dataset, we must transform the data.

This section is broken down into three steps:

Transform the time series into a supervised learning problem.

Transform the time series data so that it is stationary.

Transform the observations to have a specific scale.
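A minimal sketch of these three transforms, applied to a short hypothetical series (in practice they are applied to the CGDP training data, and inverted after forecasting):

```python
import numpy as np

# Hypothetical series standing in for the CGDP data
series = np.array([100.0, 104.0, 109.0, 113.0, 118.0, 124.0, 131.0])

# 1) Difference the series to remove the trend (make it stationary)
diff = np.diff(series)                       # y_t - y_{t-1}

# 2) Frame as supervised learning: X is the previous change, y the next
X, y = diff[:-1], diff[1:]

# 3) Scale to [-1, 1], the output range of the LSTM's default tanh activation
lo, hi = diff.min(), diff.max()
scaled = 2 * (diff - lo) / (hi - lo) - 1
```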

2.3 LSTM Model Development

The Long Short-Term Memory network (LSTM) is a type of Recurrent Neural Network (RNN). A benefit of this type of network is that it can learn and remember over long sequences and does not rely on a pre-specified window of lagged observations as input. In Keras, this is referred to as being stateful, and involves setting the "stateful" argument to "True" when defining an LSTM layer.

By default, an LSTM layer in Keras maintains state between data within one batch. A batch of data is a fixed-sized number of rows from the training dataset that defines how many patterns to process before updating the weights of the network. State in the LSTM layer between batches is cleared by default, therefore we must make the LSTM stateful. This gives us fine-grained control over when state of the LSTM layer is cleared, by calling the reset_states() function.

The LSTM layer expects input to be in a matrix with the dimensions [samples, time steps, features]. Samples: these are independent observations from the domain, typically rows of data. Time steps: these are separate time steps of a given variable for a given observation. Features: these are separate measures observed at the time of observation. We have some flexibility in how the Chinese GDP dataset is framed for the network. We will keep it simple and frame the problem so that each time step in the original sequence is one separate sample, with one time step and one feature.
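A minimal sketch of this framing, assuming a hypothetical scaled sequence:

```python
import numpy as np

# Hypothetical scaled sequence; each value becomes one sample with
# one time step and one feature
sequence = np.array([0.1, 0.2, 0.3, 0.4, 0.5])
X = sequence.reshape(len(sequence), 1, 1)   # [samples, time steps, features]
```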

2.4 LSTM Forecast

Once the LSTM model is fit to the training data, it can be used to make forecasts. Again, we have some flexibility. We can decide to fit the model once on all of the training data, then predict each new time step one at a time from the test data (we’ll call this the fixed approach), or we can re-fit the model or update the model each time step of the test data as new observations from the test data are made available (we’ll call this the dynamic approach).

In this study, we will go with the fixed approach for its simplicity, although we would expect the dynamic approach to result in better model skill. To make a forecast, we can call the predict() function on the model. This requires a 3D NumPy array as input; in this case, it will be an array of one value, the observation at the previous time step. The predict() function returns an array of predictions, one for each input row provided. Because we provide a single input, the output will be a 2D NumPy array with one value.
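A minimal sketch of such a model and a single one-step forecast, assuming the TensorFlow/Keras API; the layer size, batch size, and input value below are illustrative, and the network is untrained:

```python
import numpy as np
from tensorflow import keras

# Stateful LSTM: the batch shape [samples, time steps, features] is fixed up front
batch_size = 1
model = keras.Sequential([
    keras.Input(batch_shape=(batch_size, 1, 1)),
    keras.layers.LSTM(4, stateful=True),   # state carries over between batches
    keras.layers.Dense(1),                 # one output value per input row
])
model.compile(loss="mean_squared_error", optimizer="adam")

# Fixed approach: one forecast from the observation at the previous time step
x = np.array([0.5]).reshape(1, 1, 1)
yhat = model.predict(x, batch_size=batch_size, verbose=0)
# yhat is a 2D array with a single value
```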

Summary:

In this study we will run our Chinese GDP data through the Markov Switching Model and then compare the results with the Long Short-Term Memory forecasting algorithm; the two approaches are completely different, so we expect different forecast results. The prevalent early version was the Markov-switching regression of Goldfeld and Quandt (1973), in which parameters switch between some finite number of regimes, with the switching governed by an unobserved Markov process.

Hamilton (1989) makes an important advance by extending the Markov-switching framework to an autoregressive process, and providing an iterative filter that produces both the model likelihood function and filtered regime probabilities. Hamilton’s paper initiated a large number of applications of Markov-switching models, and these models are now a standard approach to describe the dynamics of many macroeconomic and financial time series. So in our study we want to try the new algorithm of machine learning and discuss the results.

Outline:
Experimental Test Setup
Persistence Model Forecast
LSTM Data Preparation
LSTM Model Development
LSTM Forecast
Develop a Robust Result


References:
Morteza Salehi Sarbijan (2014). "The Markov Switching Time Series Model and the ARIMA Model for Forecasting the Iranian Economic Growth." International Journal of Scientific Research in Knowledge.
Hal R. Varian (2014). "Big Data: New Tricks for Econometrics." Journal of Economic Perspectives, Volume 28, Number 2, Spring 2014, Pages 3–28.
Melike Bildirici and Özgür Ersin (2014). "Modeling Markov Switching ARMA-GARCH Neural Networks Models and an Application to Forecasting Stock Returns." The Scientific World Journal, Volume 2014, Article ID 497941, 21 pages. http://dx.doi.org/10.1155/2014/497941
Amir Ghaderi, Borhan M. Sanandaji and Faezeh Ghaderi. "Deep Forecast: Deep Learning-based Spatio-Temporal Forecasting." To the memory of Maryam Mirzakhani (1977-2017).
Chollet, François et al. (2015). Keras. https://github.com/fchollet/keras
Feuerverger, Andrey, Yu He, and Shashi Khatri (2012). "Statistical Significance of the Netflix Challenge." Statistical Science 27(2): 202–231.
Goldfeld, S. M. and Quandt, R. E. (1973). "A Markov Model for Switching Regressions." Journal of Econometrics 1, 3–16.
Hamilton, J. D. (1989). "A New Approach to the Economic Analysis of Nonstationary Time Series and the Business Cycle." Econometrica 57, 357–384.
