<?xml version="1.0" encoding="UTF-8"?><rss version="2.0"
	xmlns:content="http://purl.org/rss/1.0/modules/content/"
	xmlns:wfw="http://wellformedweb.org/CommentAPI/"
	xmlns:dc="http://purl.org/dc/elements/1.1/"
	xmlns:atom="http://www.w3.org/2005/Atom"
	xmlns:sy="http://purl.org/rss/1.0/modules/syndication/"
	xmlns:slash="http://purl.org/rss/1.0/modules/slash/"
	>

<channel>
	<title>Simulation &#8211; Aptech</title>
	<atom:link href="https://www.aptech.com/blog/category/simulation/feed/" rel="self" type="application/rss+xml" />
	<link>https://www.aptech.com</link>
	<description>GAUSS Software - Fastest Platform for Data Analytics</description>
	<lastBuildDate>Tue, 14 Jan 2025 16:50:09 +0000</lastBuildDate>
	<language>en-US</language>
	<sy:updatePeriod>
	hourly	</sy:updatePeriod>
	<sy:updateFrequency>
	1	</sy:updateFrequency>
	
	<item>
		<title>Fundamental Bayesian Samplers</title>
		<link>https://www.aptech.com/blog/fundamental-bayesian-samplers/</link>
					<comments>https://www.aptech.com/blog/fundamental-bayesian-samplers/#respond</comments>
		
		<dc:creator><![CDATA[Eric]]></dc:creator>
		<pubDate>Wed, 12 Jun 2019 04:01:07 +0000</pubDate>
				<category><![CDATA[Econometrics]]></category>
		<category><![CDATA[Programming]]></category>
		<category><![CDATA[Simulation]]></category>
		<guid isPermaLink="false">https://www.aptech.com/?p=20438</guid>

					<description><![CDATA[The posterior probability distribution is the heart of Bayesian statistics and a fundamental tool for Bayesian parameter estimation. Naturally, how to infer and build these distributions is a widely examined topic, the scope of which cannot fit in one blog. In this blog, we examine Bayesian sampling using three basic but fundamental techniques: importance sampling, Metropolis-Hastings sampling, and Gibbs sampling.]]></description>
										<content:encoded><![CDATA[<h3 id="introduction">Introduction</h3>
<p>The posterior probability distribution is the heart of Bayesian statistics and a fundamental tool for Bayesian parameter estimation. Naturally, how to infer and build these distributions is a widely examined topic, the scope of which cannot fit in one blog. </p>
<p>We can, however, start to build a better understanding of sampling by examining three basic, but fundamental techniques: </p>
<ul>
<li>The importance sampler</li>
<li>Metropolis-Hastings sampler</li>
<li>Gibbs sampler</li>
</ul>
<p><b>These samplers fall into the family of Monte Carlo samplers</b>. Monte Carlo samplers:</p>
<ul>
<li>Use repeated random draws to approximate a target probability distribution. </li>
<li>Produce a sequence of draws that can be used to estimate unknown parameters. </li>
</ul>
<p><b>The Metropolis-Hastings and Gibbs samplers are also both Markov Chain Monte Carlo (MCMC) samplers</b>. Markov Chain Monte Carlo samplers:</p>
<ul>
<li>Are a type of Monte Carlo sampler.  </li>
<li>Build a Markov Chain by starting at a specific <em>state</em> and making random changes to the state during each iteration. </li>
<li>Result in a Markov Chain that converges to the target distribution. </li>
</ul>
<h2 id="the-importance-sampler">The importance sampler</h2>
<p>Importance samplers use weighted draws from a proposed <em>importance distribution</em> to approximate characteristics of a different target distribution. </p>
<p>Importance sampling is useful when the area we are interested in may lie in a region that has a small probability of occurrence. In these cases, other sampling techniques may fail to even draw from that area.</p>
<p>Importance sampling overcomes this issue by sampling from a distribution which overweights the region of interest.</p>
<h3 id="preparing-for-sampling">Preparing for sampling</h3>
<p>In order to implement the importance sampler we must decide on our <em>importance distribution</em> and our <em>target distribution</em>. The importance distribution is what we will draw from during each iteration. </p>
<p>There are a few things to consider when deciding on our importance distribution (Koop, Poirier &amp; Tobias, 2007):</p>
<ul>
<li>The fewer draws you make, the more crucial it is to have an importance distribution that is a close approximation to the target distribution.</li>
<li>In general, importance distributions should have fatter tails than the target distribution. For example, today we will use a t-distribution as our importance distribution for estimating the mean and variance of a standard normal distribution. </li>
<li>Choosing an importance distribution is not as obvious or simple as our toy example suggests. In practice, there are statistical techniques to help choose the optimal importance distribution. </li>
</ul>
<h3 id="the-algorithm">The algorithm</h3>
<p>The importance sampler algorithm iterates over the following steps:</p>
<ol>
<li>Draw a single sample, $X$, from the importance distribution. For example, let's suppose we draw -0.010139 from a $t(0, 1, 2)$ distribution.</li>
<li>Calculate the importance sampling weight. This is a likelihood ratio of the probability of $X$ being drawn from the target distribution, $p(X)$, and the probability of $X$ being drawn from the importance distribution, $q(X)$.<br />
$$p(X) = \text{Probability of -0.010139 from } N(0, 1) = 0.3989$$
$$q(X) = \text{Probability of -0.010139 from } t(0, 1, 2) = 0.3535$$
$$w = \frac{p(X)}{q(X)} = \frac{0.3989}{0.3535} = 1.1284$$</li>
<li>Multiply draw $X$ by the importance weight.
$$ w * X = 1.1284 * -0.010139 = -0.01144 $$</li>
<li>Repeat steps 1-3 until a sample of the desired size is created.</li>
<li>Use the sample collected in steps 1-4 to compute the expected value and variance of the <em>target distribution</em>.</li>
</ol>
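<p>The steps above can be sketched in just a few lines of code. The snippet below is an illustrative Python/NumPy translation of the algorithm, not the <strong>GAUSS</strong> implementation used later in this post: it estimates the mean and standard deviation of a $N(0, 1)$ target using draws from a $t(0, 1, 2)$ importance distribution.</p>

```python
import numpy as np

rng = np.random.default_rng(34532)
keep_draws = 50000

def norm_pdf(x):
    # Target density p(x): standard normal
    return np.exp(-0.5 * x**2) / np.sqrt(2.0 * np.pi)

def t2_pdf(x):
    # Importance density q(x): Student-t with 2 degrees of freedom
    return (1.0 + 0.5 * x**2) ** -1.5 / (2.0 * np.sqrt(2.0))

# Step 1: draw from the importance distribution
x = rng.standard_t(2, size=keep_draws)

# Step 2: importance sampling weights, w = p(x) / q(x)
w = norm_pdf(x) / t2_pdf(x)

# Steps 3-5: weighted draws give estimates for the target distribution
theta_mean = np.mean(w * x)
theta_std = np.sqrt(np.mean(w * (x - theta_mean) ** 2))

print(theta_mean, theta_std)  # both should be close to 0 and 1
```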
<h3 id="example">Example*</h3>
<p>Now let's use the iterative importance sampler to estimate the mean and variance of the standard normal distribution, $N(0,1)$.</p>
<p><strong>We can implement this importance sampler</strong> using the <code>importanceSamplerTDist</code> function found in the <strong>GAUSS</strong> <a href="https://github.com/aptech/gauss-sampler-library">samplerlib library</a>.</p>
<p>This function takes one required input and three optional inputs:</p>
<hr>
<dl>
<dt>keep_draws</dt>
<dd>Scalar, the total number of draws to be kept.</dd>
<dt>dof_is</dt>
<dd>Optional input, Scalar, the degrees of freedom of the t-distribution. Default = 2.</dd>
<dt>mean_is</dt>
<dd>Optional input, Scalar, the mean of the importance function. Default = 0.</dd>
<dt>scale_is</dt>
<dd>Optional input, Scalar, the scale factor of the importance function. Default = 1.
<hr></dd>
</dl>
<p><strong>We'll run the <code>importanceSamplerTDist</code></strong> using 50,000 repetitions and the default <code>dof_is</code>, <code>mean_is</code>, and <code>scale_is</code>:</p>
<pre class="hljs-container hljs-container-solo"><code class="lang-gauss">new;
library samplerlib;

rndseed 34532;

// Number of iterations 
keep_draws = 50000;

// Call the function
struct importanceOut iOut;
iOut = importanceSamplerTdist(keep_draws);</code></pre>
<p><strong>The <code>importanceSamplerTDist</code> function returns</strong> a filled instance of the <code>importanceOut</code> structure. The structure <code>iOut</code> contains the following relevant members:</p>
<hr>
<dl>
<dt>iOut.theta_draw_is</dt>
<dd>Vector, sequence of draws from the importance sampler.</dd>
<dt>iOut.theta_mean_is</dt>
<dd>Scalar, importance sampler posterior mean.</dd>
<dt>iOut.theta_std_is</dt>
<dd>Scalar, importance sampler posterior standard deviation.</dd>
<dt>iOut.w</dt>
<dd>Vector, the importance weights.</dd>
<dt>iOut.wmean</dt>
<dd>Scalar, the mean of importance sampling weights.</dd>
<dt>iOut.wstd</dt>
<dd>Scalar, standard deviation of importance sampling weights.
<hr></dd>
</dl>
<p>The function prints the posterior distribution mean and standard deviation as output:</p>
<pre>Importance Sampling Posterior Mean and Standard Deviation
  -0.00064283983       0.99621825

Mean and standard deviation of importance sampling weights
       1.0000216       0.38137929 </pre>
<p>We can see that our estimated mean of -0.0006 and standard deviation of 0.996 are fairly close to the true mean of 0 and standard deviation of 1. </p>
<p>Additionally, the mean and standard deviation of our importance weights provide some insight into the relationship between our importance distribution and the target distribution. </p>
<h3 id="advantages">Advantages</h3>
<p>The importance sampler:</p>
<ul>
<li>Allows us to solve problems that may not be feasible using other sampling methods. </li>
<li>Can be used to study one distribution using samples generated from another distribution. </li>
<li>Can sample from interesting or important regions that other Monte Carlo techniques may overlook, given an appropriate choice of importance distribution. </li>
<li>Has applications in Bayesian inference, rare event simulation in finance and insurance, and high energy physics. </li>
</ul>
<h3 id="disadvantages">Disadvantages</h3>
<p>There are a few disadvantages of the importance sampler to consider:</p>
<ul>
<li>It does not work well in high dimensions. The variance of the samples increases as the dimensionality increases. </li>
<li>It can be difficult to identify a suitable importance distribution that is easy to sample from. </li>
</ul>
<h2 id="gibbs-sampling">Gibbs Sampling</h2>
<p>Gibbs sampling is a Markov Chain Monte Carlo technique used to sample from distributions with at least two dimensions.</p>
<p>The Gibbs sampler draws iteratively from posterior conditional distributions rather than drawing directly from the joint posterior distribution. By iteration, we build a chain of draws, with each current draw depending on the previous draw. </p>
<p>The Gibbs sampler is particularly useful because the joint posterior is not always easy to work with. Instead, we can solve the joint estimation problem as a series of smaller, easier estimation problems.</p>
<h3 id="preparing-for-sampling-1">Preparing for sampling</h3>
<p>Before running the Gibbs sampler we must find the posterior conditional distributions. Today we are going to use the Gibbs sampler to estimate two parameters from a bivariate normal posterior distribution such that</p>
<p>$$ \begin{eqnarray*} \begin{pmatrix} \theta_1\\ \theta_2 \end{pmatrix} & \sim & N\left[\left(\begin{array}{c} 0\\ 0 \end{array}\right),\left(\begin{array}{ccc} 1 & \rho\\ \rho & 1 \end{array}\right)\right]\\ \end{eqnarray*} $$</p>
<p>where $\theta_1$ and $\theta_2$ are unknown parameters of the model, while $\rho$ is the known posterior correlation between $\theta_1$ and $\theta_2$. </p>
<p>Using the properties of the multivariate normal distribution we can define the conditional distributions necessary for our sampler</p>
<p>$$\theta_1|\theta_2,\: y \sim N(\rho\theta_2,\: 1-\rho^2) \sim \rho\theta_2 + \sqrt{1-\rho^2}N(0,\:1)$$</p>
<p>and</p>
<p>$$\theta_2|\theta_1,\: y \sim N(\rho\theta_1,\: 1-\rho^2) \sim \rho\theta_1 + \sqrt{1-\rho^2}N(0,\:1) .$$</p>
<h3 id="the-algorithm-1">The algorithm</h3>
<p>The Gibbs sampler algorithm is dependent on the number of dimensions in our model. Let's consider a bivariate normal sampler with two parameters, $\theta_1$ and $\theta_2$. </p>
<ol>
<li>Determine the conditional distributions for each variable, $\theta_1$ and $\theta_2$.</li>
<li>Choose starting values $\theta_1^{(0)}$ and $\theta_2^{(0)}$.
$$\theta_1^{(0)} = 0$$
$$\theta_2^{(0)} = 0$$</li>
<li>Draw $\theta_2^{(r)}$ from $\:p(\theta_2|y,\:\theta_1^{(r-1)})$.
$$\theta_2^{(1)}|\theta_1^{(0)} = \rho\theta_1^{(0)} + \sqrt{1-\rho^2}N(0,\:1) =\\ \rho*0 + \sqrt{1-\rho^2}N(0,\:1) = -0.15107$$.</li>
<li>Draw $\theta_1^{(r)}$ from $\:p(\theta_1|y,\:\theta_2^{(r)})$.<br />
$$\theta_1^{(1)}|\theta_2^{(1)} = \rho\theta_2^{(1)} + \sqrt{1-\rho^2}N(0,\:1) =\\ \rho*-0.15107 + \sqrt{1-\rho^2}N(0,\:1) = -0.65117$$.</li>
<li>Repeat steps 3-4 for the desired number of iterations.</li>
</ol>
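<p>As a complement to the <strong>GAUSS</strong> implementation, here is an illustrative Python/NumPy sketch of the same bivariate normal Gibbs sampler with $\rho = 0.6$ (the specific seed and iteration counts are illustrative):</p>

```python
import numpy as np

rng = np.random.default_rng(45352)

rho = 0.6          # known posterior correlation
burn_in = 1000     # discarded warm-up iterations
keep_draws = 10000

theta_1, theta_2 = 0.0, 0.0   # starting values
scale = np.sqrt(1.0 - rho**2)
draws = np.empty((keep_draws, 2))

for r in range(burn_in + keep_draws):
    # Draw theta_2 | theta_1, then theta_1 | theta_2
    theta_2 = rho * theta_1 + scale * rng.standard_normal()
    theta_1 = rho * theta_2 + scale * rng.standard_normal()
    if r >= burn_in:
        draws[r - burn_in] = (theta_1, theta_2)

print(draws.mean(axis=0))  # posterior means, near (0, 0)
print(np.cov(draws.T))     # off-diagonal near rho = 0.6
```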
<h3 id="example-1">Example*</h3>
<p><strong>To implement this Gibbs sampler</strong> we can use the <code>gibbsSamplerBiN</code> function found in the <strong>GAUSS</strong> <a href="https://github.com/aptech/gauss-sampler-library">samplerlib library</a>.</p>
<p>This function takes two required inputs and three optional inputs:</p>
<hr>
<dl>
<dt>keep_draws</dt>
<dd>Scalar, the total number of draws to be kept.</dd>
<dt>rho</dt>
<dd>Scalar, the correlation parameter.</dd>
<dt>burn_in</dt>
<dd>Optional input, Scalar, the number of burn-in iterations. Default = 10% of kept draws.</dd>
<dt>theta_1_init</dt>
<dd>Optional input, Scalar, the initial value of $\theta_1$. Default = 0.</dd>
<dt>theta_2_init</dt>
<dd>Optional input, Scalar, the initial value of $\theta_2$. Default = 0.
<hr></dd>
</dl>
<p><strong>We'll run the <code>gibbsSamplerBiN</code></strong> using $\rho = 0.6$, 1000 burn-in iterations, and keeping 10,000 repetitions:</p>
<pre class="hljs-container hljs-container-solo"><code class="lang-gauss">new;
library samplerlib;

// Set random seed for repeatable random numbers
rndseed 45352;

/*
** Gibbs Sampler Specifications
** Known correlation parameter
*/
rho = 0.6;      

// Burn-in for the Gibbs sampler
burn_in = 1000;        

// Draws to keep from sampler
keep_draws = 10000;  

// Call Gibbs sampler
struct gibbsBiNOut gOut;
gOut = gibbsSamplerBiN(keep_draws, rho, burn_in);</code></pre>
<p><strong>The <code>gibbsSamplerBiN</code> function returns</strong> a filled instance of the <code>gibbsBiNOut</code> structure. The structure <code>gOut</code> contains the following relevant members:</p>
<hr>
<dl>
<dt>gOut.theta_keep_gibbs</dt>
<dd>Matrix, sequence of kept draws from the Gibbs sampler.</dd>
<dt>gOut.theta_mean_gibbs</dt>
<dd>Matrix, Gibbs sampler posterior mean.</dd>
<dt>gOut.theta_std_gibbs</dt>
<dd>Scalar, Gibbs sampler posterior standard deviation.
<hr></dd>
</dl>
<p>The function prints the posterior distribution mean and variance-covariance matrix as output:</p>
<pre>---------------------------------------------------
Variance-Covariance Matrix from Gibbs Sampler
---------------------------------------------------
Sample Mean:
    0.0060170250
    0.0032377421

Sample Variance-Covariance Matrix:
      0.97698091       0.58179239
      0.58179239       0.98180547 </pre>
<p>In this case, we know the true values of our correlation parameter and means, which gives us some insight into the performance of our sampler. </p>
<p>Note that our sample estimate is $\hat{\rho} = 0.58179239$, while the true correlation parameter is $\rho = 0.6$. In addition, the true means of $\theta_1$ and $\theta_2$ were both 0, while our sampler estimates $\hat{\theta}_1=0.006$ and $\hat{\theta}_2=0.003$.</p>
<h3 id="advantages-1">Advantages</h3>
<p>The Gibbs sampler is a powerful sampler with a number of advantages:</p>
<ul>
<li>Joint distributions may be complex to draw from directly, but we may be able to sample directly from less complicated conditional distributions. </li>
<li>Conditional distributions are lower in dimension than joint distributions and may be more suitable for applying other sampling techniques.</li>
</ul>
<h3 id="disadvantages-1">Disadvantages</h3>
<p>The Gibbs sampler is not without disadvantages, though:</p>
<ul>
<li>To perform the Gibbs sampler we must be able to find posterior conditionals for each of the variables.</li>
<li>As the correlation between variables increases, the performance of the Gibbs sampler decreases. This is because the sequence of draws from the Gibbs sampler becomes more correlated. </li>
<li>Even if we can derive the conditional distributions, they may not have known forms, in which case we cannot draw from them. </li>
<li>Drawing from multiple conditional distributions may be slow and inefficient. </li>
</ul>
<h2 id="metropolis-hastings-sampler">Metropolis-Hastings sampler</h2>
<p>Like the Gibbs sampler, the Metropolis-Hastings sampler is an MCMC sampler. While the Gibbs sampler relies on conditional distributions, the Metropolis-Hastings sampler uses the full joint density to generate candidate draws. </p>
<p>Candidate draws are not automatically added to the chain; rather, an acceptance probability is used to accept or reject each candidate draw. </p>
<h3 id="preparing-for-sampling-2">Preparing for sampling</h3>
<p>In order to implement the Metropolis-Hastings sampler two things must be chosen:</p>
<ul>
<li>The <strong>proposal distribution</strong> which defines the probability of a candidate draw given the previous draw. In our example we will use the normal distribution,  $ N\big(0,\text{ }d^2\big)$, to generate the candidate draws. </li>
<li>The <strong>acceptance probability</strong> which is the probability that we accept the candidate draw. In our example we will use</li>
</ul>
<p>$$ \alpha\big(\theta^{(r-1)}, \theta^*\big) = min \bigg[exp\Big[\frac{1}{2}\Big( | \theta^{(r-1)}| - | \theta^*| - \bigg(\frac{\theta^{(r-1)}}{d}\bigg)^2 + \bigg(\frac{\theta^*}{d}\bigg)^2\Big) \Big],1\bigg]$$</p>
<p>where $d$ is the performance parameter, which we will set to $d=6$. </p>
<h3 id="the-algorithm-2">The algorithm</h3>
<p>The general Metropolis-Hastings algorithm can be broken down into simple steps:</p>
<ol>
<li>Choose a starting value for $\theta$.
$$\theta^{(0)} = 1$$</li>
<li>Draw $\theta^*$ from the candidate generating density.
$$\theta^* = N\big(0,\text{ }36\big) = -1.3847$$</li>
<li>Calculate the acceptance probability $\alpha(\theta^{(r-1)}, \theta^*)$.<br />
$$ \alpha \big( 1, -1.3847 \big) = min \bigg[ exp \Big[ \frac{1}{2} \Big( |1| - |-1.3847| - \bigg( \frac{1}{6} \bigg)^2 + \bigg( \frac{-1.3847}{6} \bigg)^2 \Big) \Big], 1 \bigg] = 0.8356$$</li>
<li>Set $\theta^{(r)} = \theta^*$ with probability $\alpha(\theta^{(r-1)}, \theta^*)$, or else set $\theta^{(r)} = \theta^{(r-1)}$.
$$ 0.8356 > U(0,1) \rightarrow \theta^{(1)} = -1.3847$$</li>
<li>Repeat steps 2-4 for $r = 1, ... , R$.</li>
</ol>
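<p>The algorithm above can also be sketched in a few lines of code. The snippet below is an illustrative Python/NumPy translation, not the <strong>GAUSS</strong> implementation used later in this post. It runs the independence chain with the $N\big(0,\text{ }d^2\big)$ proposal, $d = 6$, and the acceptance probability defined above (the seed and iteration counts are illustrative):</p>

```python
import numpy as np

rng = np.random.default_rng(97980)

d = 6.0            # proposal standard deviation
burn_in = 100
keep_draws = 10000

theta = 1.0        # starting value, theta^(0)
draws = np.empty(keep_draws)
n_accepted = 0

for r in range(burn_in + keep_draws):
    # Step 2: candidate draw from the N(0, d^2) proposal
    theta_star = d * rng.standard_normal()

    # Step 3: acceptance probability alpha(theta, theta_star)
    log_ratio = 0.5 * (abs(theta) - abs(theta_star)
                       - (theta / d) ** 2 + (theta_star / d) ** 2)
    alpha = min(np.exp(log_ratio), 1.0)

    # Step 4: accept with probability alpha, otherwise keep theta
    if rng.uniform() < alpha:
        theta = theta_star
        n_accepted += 1
    if r >= burn_in:
        draws[r - burn_in] = theta

print(draws.mean(), draws.var())            # near 0 and 8
print(n_accepted / (burn_in + keep_draws))  # acceptance rate
```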
<h3 id="example-2">Example*</h3>
<p>In this example we will use the Metropolis-Hastings sampler to make inference about a parameter, $\theta$, which we assume has a posterior distribution given by</p>
<p>$$ p(\theta|y) \propto exp\bigg(-\frac{1}{2}|(\theta)|\bigg) $$</p>
<p>This is a special case of the Laplace distribution, which has a true mean of 0 and a variance of 8. </p>
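<p>To see where these values come from, note that normalizing $exp\big(-\frac{1}{2}|\theta|\big)$ yields a Laplace density with location $0$ and scale $b = 2$,</p>
<p>$$ p(\theta|y) = \frac{1}{2b}exp\bigg(-\frac{|\theta|}{b}\bigg), \quad b = 2, $$</p>
<p>so the mean is $0$ and the variance is $2b^2 = 2 * 4 = 8$.</p>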
<p>Though we know the true mean and variance, we are going to see how to use the independent chain Metropolis-Hastings sampler to estimate both.</p>
<p><strong>We will implement this Metropolis-Hastings sampler</strong> using the <code>metropolisHastingsIC</code> function found in the <strong>GAUSS</strong> <a href="https://github.com/aptech/gauss-sampler-library">samplerlib library</a>.</p>
<p>This function takes one required input and three optional inputs:</p>
<hr>
<dl>
<dt>keep_draws</dt>
<dd>Scalar, the total number of draws to be kept.</dd>
<dt>burn_in</dt>
<dd>Optional input, Scalar, the number of burn-in iterations. Default = 10% of kept draws.</dd>
<dt>sd_ic</dt>
<dd>Optional input, Scalar, standard deviation of Metropolis-Hastings draw. Default = 1.</dd>
<dt>theta_init</dt>
<dd>Optional input, Scalar, the initial value of $\theta$. Default = 0.
<hr></dd>
</dl>
<p><strong>We'll run the <code>metropolisHastingsIC</code></strong> using 100 burn-in iterations, 10,000 kept repetitions, <code>sd_ic = 6</code>, and <code>theta_init = 1</code>:</p>
<pre class="hljs-container hljs-container-solo"><code class="lang-gauss">new;
library samplerlib;

// Set random seed for repeatable random numbers
rndseed 97980;

/*
** Set up Metropolis Hastings chain
** Discard r0 burnin replications
*/
burn_in = 100;

// Specify the total number of replications
keep_draws = 10000;

/*
** Standard deviation for the increment 
** in the independent chain M-H algorithm
*/
sd_ic = 6;

// Set initial theta
theta_init = 1;

// Call Metropolis-Hastings sampler
struct metropolisOut mOut;
mOut = metropolisHastingsIC(keep_draws, burn_in, sd_ic, theta_init);</code></pre>
<p><strong>The <code>metropolisHastingsIC</code> function returns</strong> a filled instance of the <code>metropolisOut</code> structure. The structure <code>mOut</code> contains the following relevant members:</p>
<hr>
<dl>
<dt>mOut.theta_draw_mh </dt>
<dd>Vector, the vector of accepted draws in the sample.</dd>
<dt>mOut.theta_mean_mh</dt>
<dd>Scalar, the Metropolis-Hastings posterior mean.</dd>
<dt>mOut.theta_std_mh</dt>
<dd>Matrix, the Metropolis-Hastings posterior standard deviation.</dd>
<dt>mOut.accepted_count_mh</dt>
<dd>Scalar, the proportion of accepted candidate draws.
<hr></dd>
</dl>
<p>The function prints the posterior distribution mean and variance as output:</p>
<pre>Posterior Mean and Variance
     0.038943707
       8.1069691

Proportion of accepted candidate draws: Independence chain M-H
      0.48455446 </pre>
<h3 id="advantages-2">Advantages</h3>
<p>The Metropolis-Hastings sampler has a number of advantages:</p>
<ul>
<li>It is simple to implement. There is no need to determine conditional distributions.</li>
<li>It is reasonable for sampling from correlated, high-dimensional distributions.</li>
</ul>
<h3 id="disadvantages-2">Disadvantages</h3>
<ul>
<li>Can have a poor convergence rate.</li>
<li>Struggles with multi-modal distributions.</li>
<li>The sampler is sensitive to the step size between draws. Either too large or too small of a step size can have a negative impact on convergence. </li>
</ul>
<h3 id="conclusion">Conclusion</h3>
<p>Congratulations! Today we've learned about three fundamental Bayesian samplers: the importance sampler, the Gibbs sampler, and the Metropolis-Hastings sampler. </p>
<p>In particular, we looked at:</p>
<ol>
<li>The algorithms of each.</li>
<li>Some of the advantages and disadvantages of the samplers. </li>
<li>Examples of how to implement the samplers using the <strong>GAUSS</strong> <a href="https://github.com/aptech/gauss-sampler-library">samplerlib library</a>.</li>
</ol>
<h2 id="references">References</h2>
<ul>
<li>Our examples today are based on examples provided in the <a href="https://www.cambridge.org/us/academic/subjects/economics/econometrics-statistics-and-mathematical-economics/bayesian-econometric-methods?format=HB&amp;isbn=9780521855716">Bayesian Econometric Methods</a> textbook by <a href="https://sites.google.com/site/garykoop/">Gary Koop</a>, <a href="https://www.socsci.uci.edu/~dpoirier/research.html">Dale Poirer</a>, and <a href="https://web.ics.purdue.edu/~jltobias/">Justin Tobias</a>.</li>
</ul>
<p>Koop, G., Poirier, D. J., &amp; Tobias, J. L. (2007). <em>Bayesian econometric methods.</em> Cambridge University Press.

    <!-- MathJax configuration -->
    <style>
        .mjx-svg-href {
            fill: "inherit" !important;
            stroke: "inherit" !important;
        }
    </style>
    <script type="text/x-mathjax-config">
        MathJax.Hub.Config({ TeX: { equationNumbers: {autoNumber: "AMS"} } });
    </script>
    <script type="text/javascript">
window.MathJax = {
  tex2jax: {
    inlineMath: [ ['$','$'] ],
    displayMath: [ ['$$','$$'] ],
    processEscapes: true,
    processEnvironments: true
  },
  // Center justify equations in code and markdown cells. Elsewhere
  // we use CSS to left justify single line equations in code cells.
  displayAlign: 'center',
  "HTML-CSS": {
    styles: {'.MathJax_Display': {"margin": 0}},
    linebreaks: { automatic: false }
  },
  "SVG": {
    styles: {'.MathJax_SVG_Display': {"margin": 0}},
    linebreaks: { automatic: false }
  },
  showProcessingMessages: false,
  messageStyle: "none",
  menuSettings: { zoom: "Click" },
  AuthorInit: function() {
    MathJax.Hub.Register.StartupHook("End", function() {
            var timeout = false, // holder for timeout id
            delay = 250; // delay after event is "complete" to run callback
            var shrinkMath = function() {
              //var dispFormulas = document.getElementsByClassName("formula");
              var dispFormulas = document.getElementsByClassName("MathJax_SVG_Display");
              if (dispFormulas){
                // caculate relative size of indentation
                var contentTest = document.getElementsByTagName("body")[0];
                var nodesWidth = contentTest.offsetWidth;
                // if you have indentation
                var mathIndent = MathJax.Hub.config.displayIndent; //assuming px's
                var mathIndentValue = mathIndent.substring(0,mathIndent.length - 2);
                for (var i=0; i<dispFormulas.length; i++){
                  var dispFormula = dispFormulas[i];
                  var wrapper = dispFormula;
                  //var wrapper = dispFormula.getElementsByClassName("MathJax_Preview")[0].nextSibling;
                  var child = wrapper.firstChild;
                  wrapper.style.transformOrigin = "center"; //or top-left if you left-align your equations
                  var oldScale = child.style.transform;
                  //var newValue = Math.min(0.80*dispFormula.offsetWidth / child.offsetWidth,1.0).toFixed(2);
                  var newValue = Math.min(dispFormula.offsetWidth / child.offsetWidth,1.0).toFixed(2);
                  var newScale = "scale(" + newValue + ")";
                  if(newValue != "NaN" && !(newScale === oldScale)){
                    wrapper.style.transform = newScale;
                    wrapper.style["margin-left"]= Math.pow(newValue,4)*mathIndentValue + "px";
                    var wrapperStyle = window.getComputedStyle(wrapper);
                    var wrapperHeight = parseFloat(wrapperStyle.height);
                    wrapper.style.height = "" + (wrapperHeight * newValue) + "px";
                    if(newValue === "1.00"){
                      wrapper.style.cursor = "";
                      wrapper.style.height = "";
                    }
                    else {
                      wrapper.style.cursor = "zoom-in";
                    }
                  }

                }
            }
            };
            shrinkMath();
            window.addEventListener('resize', function() {
              clearTimeout(timeout);
              timeout = setTimeout(shrinkMath, delay);
            });
          });
  }
}
</script>
<script src="https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.7/MathJax.js?config=TeX-AMS_SVG"></script></p>]]></content:encoded>
					
					<wfw:commentRss>https://www.aptech.com/blog/fundamental-bayesian-samplers/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
			</item>
		<item>
		<title>Repeating simulations from older versions of GAUSS</title>
		<link>https://www.aptech.com/blog/repeating-simulations-from-older-versions-of-gauss/</link>
					<comments>https://www.aptech.com/blog/repeating-simulations-from-older-versions-of-gauss/#respond</comments>
		
		<dc:creator><![CDATA[aptech]]></dc:creator>
		<pubDate>Fri, 05 Oct 2018 20:30:33 +0000</pubDate>
				<category><![CDATA[Simulation]]></category>
		<guid isPermaLink="false">https://www.aptech.com/?p=17471</guid>

					<description><![CDATA[Starting in GAUSS version 12, a new suite of high quality and high-performance random number generators was introduced. While new projects should always use one of the modern RNG's, it is sometimes necessary to exactly reproduce some work from the past. GAUSS has retained a set of older LCG's, which will allow you to reproduce the random numbers from older GAUSS versions for many distributions.]]></description>
										<content:encoded><![CDATA[<h3 id="introduction">Introduction</h3>
<p>Starting in GAUSS version 12, a new suite of random number generators was introduced. GAUSS now contains several options of high quality and high-performance random number generators (RNG), such as:</p>
<ul>
<li>The Mersenne-Twister (MT-19937, SFMT-19937 and MT-2208).</li>
<li>Pierre L'Ecuyer's MRG32k3a.</li>
<li>Niederreiter and Sobol quasi-random number generators.</li>
</ul>
<p>GAUSS version 11 and older used a linear congruential generator (LCG) by default. LCGs were fine for their time, but have been superseded by higher quality RNGs such as those mentioned above.</p>
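<p>For context, a linear congruential generator produces each draw from the previous one with the recurrence <code>x[n+1] = (a*x[n] + c) mod m</code>. The short Python sketch below illustrates the idea; the constants are the classic "Numerical Recipes" values, chosen purely for illustration, and are not the constants GAUSS used internally:</p>

```python
# A generic linear congruential generator (LCG). The constants a, c, m
# are the classic "Numerical Recipes" values, used here purely for
# illustration -- they are NOT the constants used internally by GAUSS.
def lcg(seed, a=1664525, c=1013904223, m=2**32):
    x = seed
    while True:
        x = (a * x + c) % m
        yield x / m  # scale the integer state to a uniform draw in [0, 1)

gen = lcg(777)
draws = [next(gen) for _ in range(5)]
print(draws)
```

Because the recurrence is deterministic, the same seed always reproduces the same sequence, which is exactly the property that makes old simulations repeatable.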
<h2 id="how-do-i-reproduce-my-simulations">How do I reproduce my simulations?</h2>
<p>While new projects should always use one of the modern RNGs, it is sometimes necessary to exactly reproduce some work from the past. GAUSS has retained a set of LCGs, such as <a href="https://docs.aptech.com/gauss/rndlcgam.html" target="_blank" rel="noopener">rndLCGam</a>, <a href="https://docs.aptech.com/gauss/rndlcbeta.html" target="_blank" rel="noopener">rndLCBeta</a>, <a href="https://docs.aptech.com/gauss/rndlcn.html" target="_blank" rel="noopener">rndLCn</a>, <a href="https://docs.aptech.com/gauss/rndlcu.html" target="_blank" rel="noopener">rndLCu</a> and <a href="https://docs.aptech.com/gauss/rndlci.html" target="_blank" rel="noopener">rndLCi</a>. These functions will allow you to reproduce the random numbers from older GAUSS versions for many distributions.</p>
<p>However, while <code>rndLCn</code> does compute normally distributed random numbers, it will not match those from GAUSS version 10 and earlier. GAUSS version 11 improved the method that <a href="https://docs.aptech.com/gauss/rndn.html" target="_blank" rel="noopener">rndn</a> uses to transform the uniform random numbers created by the underlying LCG.</p>
<h2 id="how-do-i-reproduce-rndn-from-gauss-10-and-older">How do I reproduce rndn from GAUSS 10 and older?</h2>
<p>In order to allow GAUSS users to reproduce the <code>rndn</code> output from older versions of GAUSS, a new function, <code>_rndng10</code>, was added in GAUSS version 18.</p>
<pre class="hljs-container hljs-container-solo"><code class="lang-gauss">rndseed 777;
x =  _rndng10(5,1);
print x;</code></pre>
<pre>      0.80449208
      0.83614293
     -0.32917873
     -0.47774279
      0.22364814 </pre>
<p>As you can see, it works in exactly the same manner as <code>rndn</code>.</p>
<h2 id="dont-just-replace-rndn-with-_rndng10">Don't just replace rndn with _rndng10</h2>
<p>While some of you may be tempted to find-and-replace <code>rndn</code> with <code>_rndng10</code>, you should not do this for a couple of reasons.</p>
<ol>
<li>
<p><code>_rndng10</code> is available to repeat simulations run in the past. It is still inferior to the RNG used by the modern <code>rndn</code>.</p>
</li>
<li>The code will be less portable. It will not work in a version of GAUSS older than GAUSS 18.</li>
</ol>
<h2 id="swap-random-number-generators-with-a-define">Swap random number generators with a define</h2>
<p>You can think of <code>#define</code> as an instruction to GAUSS to replace all instances of one thing with another during the compile phase, before the program is run. This does not change the code in the file, just how GAUSS interprets it.</p>
<div class="alert alert-info" role="alert">NOTE: #define only has effect during a program run. Therefore, it can be used only inside a program file, not from the command line or 'run selected text'.</div>
<p>In our case, we want to replace all instances of <code>rndn</code> with <code>_rndng10</code>. We can do that like this:</p>
<pre class="hljs-container hljs-container-solo"><code class="lang-gauss">#define rndn _rndng10

rndseed 777;
x = rndn(5,1);
print x;</code></pre>
<pre>      0.80449208
      0.83614293
     -0.32917873
     -0.47774279
      0.22364814</pre>
<h2 id="make-it-portable-with-ifminkernelversion">Make it portable with #ifminkernelversion</h2>
<p>The <code>#define</code> is a quick and convenient way to swap out <code>rndn</code> for <code>_rndng10</code>. However, it is still not portable. It will not run on any version of GAUSS that does not have the <code>_rndng10</code> function (version 17 and older).</p>
<p>While you could just comment it out, <code>#ifminkernelversion</code> is a better option. <code>#ifminkernelversion</code> is another preprocessor (i.e., before run-time) command. It tells GAUSS to include specific code ONLY if using a certain version of GAUSS or newer.</p>
<pre class="hljs-container hljs-container-solo"><code class="lang-gauss">// If we are using GAUSS 18 or newer
// replace rndn with _rndng10
#ifminkernelversion(18)
    #define rndn _rndng10
#endif

rndseed 777;
x = rndn(5,1);
print x;</code></pre>
<pre>      0.80449208
      0.83614293
     -0.32917873
     -0.47774279
      0.22364814</pre>
<h3 id="conclusion">Conclusion</h3>
<p>In this post, we have learned:</p>
<ul>
<li>That the GAUSS random number generators were upgraded in versions 11 and 12.</li>
<li>How to swap between the older and newer versions of <code>rndn</code> in a convenient and portable manner.</li>
</ul>
<p>Code and data from this blog can be found <a href="https://github.com/aptech/gauss_blog/tree/master/simulation/repeating-older-sims-10.05.18">here</a>.</p>
<h3 id="further-reading">Further Reading</h3>
<ol>
<li><a href="https://www.aptech.com/blog/make-your-code-portable-data-paths/" target="_blank" rel="noopener">Make your code portable: Data paths</a></li>
<li><a href="https://www.aptech.com/blog/the-current-working-directory-what-you-need-to-know/" target="_blank" rel="noopener">The Current Working Directory: What you need to know</a></li>
<li><a href="https://www.aptech.com/blog/the-basics-of-optional-arguments-in-gauss-procedures/" target="_blank" rel="noopener">The Basics of Optional Arguments in GAUSS Procedures</a></li>
<li><a href="https://www.aptech.com/blog/understanding-errors-g0025-undefined-symbol/" target="_blank" rel="noopener">Understanding Errors | G0025 : Undefined symbol</a></li>
<li><a href="https://www.aptech.com/blog/understanding-errors-g0064-operand-missing/" target="_blank" rel="noopener">Understanding Errors: G0064 Operand Missing</a></li>
<li><a href="https://www.aptech.com/blog/understanding-errors-g0058-index-out-of-range/" target="_blank" rel="noopener">Understanding Errors: G0058 Index out-of-Range</a></li>
</ol>]]></content:encoded>
					
					<wfw:commentRss>https://www.aptech.com/blog/repeating-simulations-from-older-versions-of-gauss/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
			</item>
	</channel>
</rss>
