<?xml version="1.0" encoding="UTF-8"?><rss version="2.0"
	xmlns:content="http://purl.org/rss/1.0/modules/content/"
	xmlns:wfw="http://wellformedweb.org/CommentAPI/"
	xmlns:dc="http://purl.org/dc/elements/1.1/"
	xmlns:atom="http://www.w3.org/2005/Atom"
	xmlns:sy="http://purl.org/rss/1.0/modules/syndication/"
	xmlns:slash="http://purl.org/rss/1.0/modules/slash/"
	>

<channel>
	<title>programming &#8211; Aptech</title>
	<atom:link href="https://www.aptech.com/blog/tag/programming/feed/" rel="self" type="application/rss+xml" />
	<link>https://www.aptech.com</link>
	<description>GAUSS Software - Fastest Platform for Data Analytics</description>
	<lastBuildDate>Thu, 20 Mar 2025 21:17:22 +0000</lastBuildDate>
	<language>en-US</language>
	<sy:updatePeriod>
	hourly	</sy:updatePeriod>
	<sy:updateFrequency>
	1	</sy:updateFrequency>
	
	<item>
		<title>Maximum Likelihood Estimation in GAUSS</title>
		<link>https://www.aptech.com/blog/maximum-likelihood-estimation-in-gauss/</link>
					<comments>https://www.aptech.com/blog/maximum-likelihood-estimation-in-gauss/#respond</comments>
		
		<dc:creator><![CDATA[Eric]]></dc:creator>
		<pubDate>Fri, 16 Oct 2020 04:59:29 +0000</pubDate>
				<category><![CDATA[Econometrics]]></category>
		<category><![CDATA[Programming]]></category>
		<category><![CDATA[maximum likelihood]]></category>
		<category><![CDATA[programming]]></category>
		<guid isPermaLink="false">https://www.aptech.com/?p=11580039</guid>

					<description><![CDATA[Maximum likelihood is a fundamental workhorse for estimating model parameters with applications ranging from simple linear regression to advanced discrete choice models. Today we learn how to perform maximum likelihood estimation with the GAUSS Maximum Likelihood MT library using our simple linear regression example. 

We'll show all the fundamentals you need to get started with maximum likelihood estimation in GAUSS including:
<ul>
<li> How to create a likelihood function.</li>
<li> How to call the <code>maxlikmt</code> procedure to estimate parameters.</li> 
<li> How to interpret the results from <code>maxlikmt</code>.</li>
</ul>
]]></description>
										<content:encoded><![CDATA[<h2 id="introduction">Introduction</h2>
<p>Maximum likelihood is a fundamental workhorse for estimating model parameters with applications ranging from simple linear regression to advanced discrete choice models. Today we examine how to implement this technique in GAUSS using the <a href="https://store.aptech.com/gauss-applications-category/maximum-likelihood-mt.html" target="_blank" rel="noopener">Maximum Likelihood MT library</a>. </p>
<p>Maximum Likelihood MT provides a number of useful, pre-built tools that we will demonstrate today using a simple linear model. We'll show all the fundamentals you need to get started with maximum likelihood estimation in GAUSS including:</p>
<ul>
<li>How to create a likelihood function.</li>
<li>How to call the <code>maxlikmt</code> procedure to estimate parameters. </li>
<li>How to interpret the results from <code>maxlikmt</code>.</li>
</ul>
<div class="alert alert-info" role="alert">Note: This builds on our <a href="https://www.aptech.com/blog/beginners-guide-to-maximum-likelihood-estimation-in-gauss/" target="_blank" rel="noopener">&quot;Beginner's Guide to Maximum Likelihood Estimation&quot;</a>.  </div>
<h2 id="maximum-likelihood-estimation-in-gauss">Maximum Likelihood Estimation in GAUSS</h2>
<p>The Maximum Likelihood MT library provides a full suite of tools for easily and efficiently tackling maximum likelihood estimation. </p>
<p>Today we will use the <code>maxlikmt</code> procedure to estimate two unknown parameters, $\hat{\beta}$ and $\hat{\sigma^2}$. The <code>maxlikmt</code> procedure requires three inputs: a pointer to a likelihood function, a vector of starting parameter values, and the response data. </p>
<p>It also accepts any <a href="https://www.aptech.com/blog/the-basics-of-optional-arguments-in-gauss-procedures/" target="_blank" rel="noopener">optional inputs</a> needed for the likelihood function and an optional <a href="https://www.aptech.com/resources/tutorials/a-gentle-introduction-to-using-structures/" target="_blank" rel="noopener">control structure</a> for fine-tuning optimization.</p>
<pre class="hljs-container hljs-container-solo"><code class="lang-gauss">out = maxlikmt(&amp;lfn, par_start, y [,..., ctl]);</code></pre>
<hr>
<dl>
<dt>&amp;lfn</dt>
<dd>A pointer to a procedure that returns either the scalar log-likelihood, a vector of log-likelihoods, or the weighted log-likelihood.</dd>
<dt>par_start</dt>
<dd>Vector, starting parameter values.</dd>
<dt>y</dt>
<dd>Vector, the response data.</dd>
<dt>...</dt>
<dd>Optional, additional inputs required for the likelihood function. These are passed directly in the order and form provided to the likelihood function.</dd>
<dt>ctl</dt>
<dd>Structure, an instance of the <code>maxlikmtControl</code> structure used to control features of optimization including algorithm, bounds, etc.</dd>
</dl>
<hr>
<div class="alert alert-info" role="alert">You can create a pointer to a procedure by prepending the name of the procedure with an ampersand (&amp;). </div>
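<p>For example, a pointer to a hypothetical procedure named <code>myProc</code> could be created like this:</p>
<pre class="hljs-container hljs-container-solo"><code class="lang-gauss">// A simple procedure for illustration
proc (1) = myProc(x);
    retp(x + 1);
endp;

// Prepend '&amp;' to get a pointer to the procedure
f = &amp;myProc;</code></pre>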
<h2 id="maximum-likelihood-estimation-linear-model-example">Maximum Likelihood Estimation Linear Model Example</h2>
<p>Let's start with a simple linear regression example. </p>
<p>In linear regression, we assume that the model residuals are independently and identically normally distributed:</p>
<p>$$\epsilon = y - \hat{\beta}x \sim N(0, \sigma^2)$$</p>
<p>Based on this assumption, the log-likelihood function for the unknown parameter vector, $\theta = \{\beta, \sigma^2\}$, conditional on the observed data, $y$ and $x$ is given by:</p>
<p>$$L(\theta|y, x) = -\frac{1}{2}\sum_{i=1}^n \Big[ \ln \sigma^2 + \ln (2\pi) + \frac{(y_i-\hat{\beta}x_i)^2}{\sigma^2} \Big] $$</p>
<p>For today's example, we will simulate 800 observations of linear data using randomly generated $x$ values and true parameter values of $\beta = 1.2$ and $\sigma^2 = 4$. </p>
<p>The <a href="https://github.com/aptech/gauss_blog/tree/master/econometrics/maximum-likelihood-estimation-9.20.20" target="_blank" rel="noopener">code to generate our dataset can be found here</a>.</p>
<h3 id="the-linear-model-likelihood-procedure">The Linear Model Likelihood Procedure</h3>
<p>Our first step is to create the procedure to compute the log-likelihood.</p>
<p>The log-likelihood procedure will be called by <code>maxlikmt</code>, so we have to set it up in the way that <code>maxlikmt</code> expects. The inputs to the log-likelihood procedure are:</p>
<hr>
<dl>
<dt>parms</dt>
<dd>Vector, the current parameter vector.</dd>
<dt>y</dt>
<dd>Vector, the response data.</dd>
<dt>...</dt>
<dd>Optional, additional inputs required for the likelihood function.</dd>
<dt>ind</dt>
<dd>3x1 vector created by <code>maxlikmt</code>, specifying whether the gradient or Hessian should be computed. Your procedure can ignore this input if you choose to have <code>maxlikmt</code> compute a numerical gradient and/or Hessian.</dd>
</dl>
<hr>
<p>The log-likelihood procedure must return a <code>modelResults</code> structure with the <code>function</code> member containing the value of log-likelihood at the current parameter values. </p>
<div class="alert alert-info" role="alert">The sum is intentionally omitted so that the procedure returns a vector of log-likelihoods by observation.</div>
<pre class="hljs-container hljs-container-solo"><code class="lang-gauss">proc (1) = lfn(theta, y, x, ind);
    local beta_est, sigma2;

    // Extract parameters
    beta_est = theta[1];
    sigma2 = theta[2];

    // Declare the modelResults structure
    struct modelResults mm;

    // Set the modelResults structure member, 'function',
    // equal to the log-likelihood function value
    mm.function = -1/2 * (ln(sigma2) + ln(2*pi) + (y - x*beta_est)^2 / sigma2);

    retp(mm);
endp;</code></pre>
<p>After looking at the above procedure, you are probably wondering:</p>
<ol>
<li>Why do we need to return the function value in a <code>modelResults</code> structure?</li>
<li>What is <code>ind</code>?</li>
</ol>
<p>Hold those questions for now. We will answer them in our second example when we add an analytical gradient.</p>
<h3 id="calling-the-maxlikmt-procedure">Calling the <code>maxlikmt</code> procedure</h3>
<p>Once the log-likelihood procedure has been created, we are ready to call <code>maxlikmt</code>. We first specify our starting values:</p>
<pre class="hljs-container hljs-container-solo"><code class="lang-gauss">// Starting values
theta_start = { 0.5, 0.5 };</code></pre>
<p>Finally, we call <code>maxlikmt</code> and print the results using <code>maxlikmtprt</code>:</p>
<pre class="hljs-container hljs-container-solo"><code class="lang-gauss">// Perform estimation and print report
call maxlikmtprt(maxlikmt(&amp;lfn, theta_start, y, x));</code></pre>
<div class="alert alert-info" role="alert">Note that the data is passed in the same order to <code>maxlikmt</code> as to the likelihood function.</div>
<h3 id="maximum-likelihood-estimate-results">Maximum Likelihood Estimate Results</h3>
<p><a href="https://www.aptech.com/wp-content/uploads/2020/09/linear-regression-mle.jpeg"><img src="https://www.aptech.com/wp-content/uploads/2020/09/linear-regression-mle.jpeg" alt="" width="1200" height="600" class="aligncenter size-full wp-image-11580023" /></a></p>
<p>GAUSS prints the following results to the input/output window:</p>
<pre>return code =    0
normal convergence

Log-likelihood    -1686.04
Number of cases     800

Covariance of the parameters computed by the following method:
ML covariance matrix

Parameters    Estimates     Std. err   Est./se     Prob   Gradient
------------------------------------------------------------------
x[1,1]           1.1486       0.0710    16.177   0.0000     0.0000
x[2,1]           3.9638       0.1982    20.001   0.0000     0.0028</pre>
<p>The first thing to note is that GAUSS tells us that the optimization converged normally. If the optimization failed to converge, GAUSS would report this along with the reason for the failure.</p>
<p>The GAUSS maximum likelihood estimates are $\hat{\beta}_{MLE} = 1.1486$ and $\hat{\sigma^2} = 3.9638$.</p>
<p>In addition to the parameter estimates, GAUSS provides confidence intervals around the estimates:</p>
<pre>Wald Confidence Limits

                              0.95 confidence limits
Parameters    Estimates     Lower Limit   Upper Limit   Gradient
----------------------------------------------------------------
x[1,1]           1.1486          1.0093        1.2880     0.0000
x[2,1]           3.9638          3.5747        4.3528     0.0028</pre>
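<p>These are standard Wald intervals, computed as the estimate plus or minus 1.96 standard errors. For example, for $\hat{\beta}$:</p>
<pre class="hljs-container hljs-container-solo"><code class="lang-gauss">// 95% Wald interval: estimate +/- 1.96 * std. err.
b_hat = 1.1486;
se = 0.0710;

// Approximately reproduces the limits reported above
print (b_hat - 1.96*se) ~ (b_hat + 1.96*se);</code></pre>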
<h3 id="storing-estimation-results-in-the-maxlikmtresults-output-structure">Storing Estimation Results in the <code>maxlikmtResults</code> Output Structure</h3>
<p>Our first example used the <code>call</code> keyword to discard the return from <code>maxlikmt</code>. Alternatively, you can store the estimation results in a <code>maxlikmtResults</code> structure.</p>
<p>Some of the most useful elements in the <code>maxlikmtResults</code> structure include:</p>
<hr>
<dl>
<dt>m_out.par.obj.m</dt>
<dd>Vector, estimated parameter values. These are in the same order as the parameters were passed in the parameter starting values input.</dd>
<dt>m_out.returnDescription</dt>
<dd>String, describes whether convergence was normal or whether errors occurred.</dd>
<dt>m_out.covPar</dt>
<dd>Matrix, covariance of the parameter estimates. This is computed as the Moore-Penrose inverse of the Hessian at the final parameter estimates.</dd>
</dl>
<hr>
<p>Here is how to modify our earlier code to save the results.</p>
<pre class="hljs-container hljs-container-solo"><code class="lang-gauss">// Declare 'm_out' to be a maxlikmtResults structure
struct maxlikmtResults m_out;

// Perform estimation and store results in 'm_out'
m_out = maxlikmt(&amp;lfn, theta_start, y, x);

// Print the parameter estimates
print m_out.par.obj.m;

// Print the return description
print m_out.returnDescription;</code></pre>
<div class="alert alert-info" role="alert">More information about how to view stored results can be found <a href="https://www.aptech.com/resources/tutorials/introduction-to-gauss-viewing-data-in-gauss/">here</a>.</div>
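<p>For example, assuming <code>m_out</code> holds the results as above, standard errors can be recovered from the stored covariance matrix:</p>
<pre class="hljs-container hljs-container-solo"><code class="lang-gauss">// Standard errors are the square roots of the diagonal
// of the parameter covariance matrix
std_err = sqrt(diag(m_out.covPar));
print std_err;</code></pre>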
<h2 id="specifying-the-analytical-gradient">Specifying the Analytical Gradient</h2>
<p>For our linear model, it is quite feasible to derive the analytical first derivatives:</p>
<p>$$\frac{\partial L(\theta|y, x)}{\partial \beta} = \frac{1}{\sigma^2}\Big[x(y - \beta x)\Big]$$
$$\frac{\partial L(\theta|y, x)}{\partial \sigma^2} = -\frac{1}{2}\Big[\frac{1}{\sigma^2} - \frac{(y - \beta x)^2}{\sigma^4}\Big]$$</p>
<p>We can specify to GAUSS to use these analytical first derivatives by computing them in our likelihood procedure and assigning the results to the <code>gradient</code> member of the <code>modelResults</code> structure as shown below:</p>
<pre class="hljs-container hljs-container-solo"><code class="lang-gauss">// Write likelihood function
// with analytical derivatives

proc (1) = lfn(theta, y, x, ind);
    local beta_est, sigma2, g1, g2;

    beta_est = theta[1];
    sigma2 = theta[2];

    struct modelResults mm;
    // Specify likelihood function
    if ind[1];
        mm.function = -1/2*(ln(sigma2) + ln(2*pi) + (y - x*beta_est)^2 / sigma2);
    endif;

    // Include gradients
    if ind[2];

       g1 = 1/sigma2 * ((y - x*beta_est) .*x);
       g2 = -1/2 * ((1/sigma2) - (y - x*beta_est)^2 / sigma2^2); 

       // Concatenate into a (n observations)x2 matrix. 
       mm.gradient = g1 ~ g2;
    endif;

    retp(mm);
endp;  </code></pre>
<h2 id="what-is-ind">What is <code>ind</code>?</h2>
<p><code>ind</code> is a 3x1 vector created by <code>maxlikmt</code> which tells your likelihood procedure whether it needs a gradient or Hessian calculation in addition to the function evaluation. As we can see in the above procedure, if the second element of <code>ind</code> is nonzero, <code>maxlikmt</code> is asking for a gradient calculation.</p>
<p>Note that:</p>
<ol>
<li>You do not have to use or check for <code>ind</code> in your likelihood procedure if you are using numerical derivatives.</li>
<li><code>maxlikmt</code> creates <code>ind</code> internally and passes it to the likelihood procedure. You do not have to declare or create it.</li>
</ol>
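<p>Conceptually, on an iteration where <code>maxlikmt</code> needs the function value and gradient but not the Hessian, the <code>ind</code> vector it passes to your procedure would look like:</p>
<pre class="hljs-container hljs-container-solo"><code class="lang-gauss">// 1st element nonzero: function value requested
// 2nd element nonzero: gradient requested
// 3rd element nonzero: Hessian requested
ind = { 1, 1, 0 };</code></pre>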
<h3 id="how-ind-can-speed-up-your-modeling">How <code>ind</code> can speed up your modeling</h3>
<p>The advantage of using the <code>ind</code> input rather than separate gradient or Hessian procedures is that many of the calculations needed for the function evaluation are often also needed for the gradient. Combining everything into one procedure lets you perform those shared computations only once.</p>
<p>For example, if we needed to speed up our model, we could modify our likelihood procedure to compute the residuals once near the top of the code and reuse the newly created <code>residuals</code> and <code>residuals2</code> variables in both the function and gradient calculations.</p>
<pre class="hljs-container hljs-container-solo"><code class="lang-gauss">proc (1) = lfn(theta, y, x, ind);
    local beta_est, sigma2, g1, g2, residuals, residuals2;

    beta_est = theta[1];
    sigma2 = theta[2];

    // Operations common to likelihood and gradient
    residuals = y - x*beta_est;
    residuals2 = residuals^2;

    struct modelResults mm;
    // Specify likelihood function
    if ind[1];
        mm.function = -1/2*(ln(sigma2) + ln(2*pi) + residuals2 / sigma2);
    endif;

    // Include gradients
    if ind[2];

       g1 = 1/sigma2 * (residuals .* x);
       g2 = -1/2 * ((1/sigma2) - residuals2 / sigma2^2); 

       // Concatenate into an (n observations)x2 matrix. 
       mm.gradient = g1 ~ g2;
    endif;

    retp(mm);
endp;</code></pre>
<h2 id="conclusions">Conclusions</h2>
<p>Congratulations! After today's blog, you should have a better understanding of how to implement maximum likelihood in GAUSS. </p>
<p>We've covered the components of the <code>maxlikmt</code> procedure including:</p>
<ul>
<li>The <code>maxlikmt</code> inputs.</li>
<li>The log-likelihood procedure and the <code>modelResults</code> structure.</li>
<li>The <code>maxlikmtResults</code> output structure.</li>
<li>Specifying analytical gradients in the log-likelihood function.</li>
</ul>
<h2 id="try-out-the-gauss-maximum-likelihood-mt-library">Try Out the GAUSS Maximum Likelihood MT Library</h2>

]]></content:encoded>
					
					<wfw:commentRss>https://www.aptech.com/blog/maximum-likelihood-estimation-in-gauss/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
			</item>
	</channel>
</rss>
