# Make your time series computations up to 20 times faster

### Introduction

The key to getting the most performance from a matrix language is to vectorize your code as much as possible. Vectorized code performs operations on large sections of matrices and vectors in a single operation, rather than looping over the elements one-by-one.

For example, we could scale a vector by looping over each element:

```
// Create a random normal vector
// with 10 elements
x = rndn(10, 1);

// NOT vectorized:
// loop through each element
// in 'x' and multiply by 3
for i(1, rows(x), 1);
    x[i] = x[i] * 3;
endfor;
```

or we could perform all of the multiplications in a single call:

```
// Vectorized:
// scale each element of 'x'
// with a single operation
x = x * 3;
```

Not only is the second version much shorter, but it is also approximately 10 times faster.
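You can check the difference on your own machine with a rough timing sketch. This uses GAUSS's hsec timer, which reports hundredths of a second; the exact speed-up will vary with vector length and hardware:

```
// Rough timing sketch: compare the loop
// against the vectorized operation
x = rndn(1e6, 1);

// Time the element-by-element loop
t0 = hsec;
for i(1, rows(x), 1);
    x[i] = x[i] * 3;
endfor;

// 'hsec' is in hundredths of a second
loop_time = (hsec - t0) / 100;

// Time the vectorized version
t0 = hsec;
x = x * 3;
vec_time = (hsec - t0) / 100;

print loop_time;
print vec_time;
```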

## Challenges to vectorizing time series code

Our initial example cut the loop down to a single line and ran roughly 10 times more quickly. Clearly, this technique is valuable, but not all cases are as simple. Let's start by looking at the standard AR(1) model:

$$y_t = \rho y_{t-1} + \epsilon_t, \qquad y_1 = C$$

Let's start by writing a non-vectorized version of this equation:

```
// Number of observations to simulate
nobs = 5;

// Initial value of the series
C = 1;

// AR parameter
rho = 0.3;

// Set random seed for repeatable random numbers
rndseed 5423;

// Pre-allocate the output vector 'y',
// with C as the initial value
y = C | zeros(nobs, 1);

// Create error term
epsilon = rndn(rows(y), 1);

// Simulate the AR(1) process
for t(2, nobs + 1, 1);
    y[t] = y[t-1] * rho + epsilon[t];
endfor;
```

## Loop-carried dependencies

What prevents us from vectorizing this code with the plus and times operators (or from parallelizing it) is that each iteration of the loop depends on the result of the previous iteration. This is the definition of a loop-carried dependency.
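To see the problem concretely, consider a naive attempt to vectorize the loop with array slicing. This sketch is intentionally wrong:

```
// WRONG: this does NOT reproduce the loop above.
// The entire right-hand side is evaluated before
// the assignment, so each element is computed from
// the stale, pre-update value of y[t-1] rather than
// the value computed earlier in the same pass.
y[2:nobs+1] = y[1:nobs] * rho + epsilon[2:nobs+1];
```

Only the second element of the result will be correct, because only `y[1]` held its final value before the assignment.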

## Vectorizing an AR process with recserar

Fortunately, GAUSS has a built-in function for exactly this type of problem: recserar. In terms of an AR(1) model, recserar takes the following inputs:

- **epsilon**: A T×1 vector, the error term.
- **C**: A scalar, the initial value of the time series.
- **rho**: A scalar, the AR parameter.

Using recserar, the code looks like this:

```
// Number of observations to simulate
nobs = 5;

// Initial value of the series
C = 1;

// AR parameter
rho = 0.3;

// Set random seed for repeatable random numbers
rndseed 5423;

// Create error term
epsilon = rndn(nobs+1, 1);

// Simulate the AR(1) process
y_vec = recserar(epsilon, C, rho);
```
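As a quick sanity check (a sketch reusing the variables defined above), the recserar result can be compared element-by-element against the explicit loop:

```
// Re-run the explicit loop with the same epsilon
y_loop = C | zeros(nobs, 1);
for t(2, nobs + 1, 1);
    y_loop[t] = y_loop[t-1] * rho + epsilon[t];
endfor;

// The maximum absolute difference should be zero
print maxc(abs(y_loop - recserar(epsilon, C, rho)));
```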

## Results

The speed-up you see from this change is mainly a function of the number of elements in the vector. In the testing for this post, the recserar version ran up to around 20 times faster, as shown in the graph above.

### Conclusion

In this post we saw how the GAUSS recserar function can be used to vectorize code with loop-carried dependencies like the simulation of an AR(1) model, giving speed-ups of up to 20 times. This pattern comes up often in other time series models such as GARCH. However, it is not exclusive to time series, so keep it in mind whenever you find a loop that is difficult to vectorize.
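The same idea extends beyond first-order processes. Assuming the p-th order form of recserar, in which the initial-value and coefficient inputs are P×1 vectors, an AR(2) simulation might be sketched as:

```
// AR(2) sketch:
// y[t] = 0.3*y[t-1] + 0.2*y[t-2] + epsilon[t]
// (assumes the p-th order form of recserar, where
// the second and third inputs are P x 1 vectors)
rho2 = 0.3 | 0.2;      // AR coefficients rho_1, rho_2
y0   = 1 | 1;          // two initial values
e2   = rndn(102, 1);   // error term

y2 = recserar(e2, y0, rho2);
```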

Code and data from this blog can be found here.
