Category: Machine learning

Classification with Regularized Logistic Regression

Logistic regression has long been a popular tool for modeling categorical outcomes, widely used across fields like epidemiology, finance, and econometrics. In today’s blog we’ll cover the fundamentals of logistic regression. Using a real-world survey data application, we’ll provide a step-by-step guide to implementing your own regularized logistic regression models with the GAUSS Machine Learning library (a brief sketch of the workflow follows the list), including:
  1. Data preparation.
  2. Model fitting.
  3. Classification predictions.
  4. Evaluating predictions and model fit.

Machine Learning With Real-World Data

If you’ve ever done empirical work, you know that real-world data rarely, if ever, arrives clean and ready for modeling. No data analysis project consists solely of fitting a model and making predictions. In today’s blog, we walk through a machine learning project from start to finish, giving you a foundation for completing your own machine learning project in GAUSS (a short sketch of the early stages follows the list). We’ll work through:
  • Data exploration and cleaning.
  • Splitting data for training and testing.
  • Model fitting and prediction.
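As a rough illustration of the exploration, cleaning, and splitting stages, here is a short Python/pandas sketch. The blog performs the equivalent steps in GAUSS, and the data file and label column below are hypothetical.

```python
import pandas as pd
from sklearn.model_selection import train_test_split

df = pd.read_csv("loans.csv")                  # hypothetical data file

# Explore: structure, summary statistics, and missing-value counts.
df.info()
print(df.describe())
print(df.isna().sum())

# Clean: drop duplicate rows, fill numeric gaps with the median,
# and one-hot encode any categorical columns.
df = df.drop_duplicates()
num_cols = df.select_dtypes("number").columns
df[num_cols] = df[num_cols].fillna(df[num_cols].median())
df = pd.get_dummies(df, drop_first=True)

# Split: hold out 20% of observations for testing.
X = df.drop(columns=["default"])               # hypothetical label column
y = df["default"]
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=0, stratify=y
)
```

Holding out a test set before any model fitting keeps the final evaluation honest.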

Understanding Cross-Validation

If you’ve explored machine learning models, you’ve most likely encountered the term “cross-validation” at some point. Cross-validation is an important step for training robust and reliable machine learning models. In this blog, we’ll break cross-validation down into simple terms. Using a practical demonstration, we’ll equip you with the knowledge to confidently use cross-validation in your machine learning projects.
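The core idea fits in a few lines of code. The sketch below uses Python/scikit-learn with simulated data purely for illustration; the blog’s demonstration itself is done in GAUSS.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import KFold, cross_val_score
from sklearn.linear_model import LogisticRegression

# Simulated classification data for illustration only.
X, y = make_classification(n_samples=500, n_features=10, random_state=1)

# 5-fold CV: the data are split into 5 folds; each fold serves once as the
# validation set while the model is trained on the remaining 4 folds.
cv = KFold(n_splits=5, shuffle=True, random_state=1)
scores = cross_val_score(LogisticRegression(max_iter=1000), X, y, cv=cv)

print("Fold accuracies:", np.round(scores, 3))
print("Mean CV accuracy:", scores.mean())
```

Because every observation is used for validation exactly once, the mean fold accuracy is a less optimistic and more stable estimate of out-of-sample performance than a single train/test split.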

Fundamentals of Tuning Machine Learning Hyperparameters

Machine learning algorithms often rely on hyperparameters that can substantially affect model performance. Unlike model coefficients, hyperparameters are not learned from the data; they are modeling choices that practitioners must make before training. An important step in machine learning modeling is tuning these hyperparameters to improve prediction accuracy. In today’s blog, we cover some fundamentals of hyperparameter tuning and look more specifically at fine-tuning our previous decision forest model.
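One common tuning strategy is a cross-validated grid search. The following sketch shows the idea on simulated data with a random forest classifier and a hypothetical parameter grid; the blog tunes its decision forest in GAUSS rather than with scikit-learn.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import GridSearchCV

# Simulated data for illustration only.
X, y = make_classification(n_samples=1000, n_features=20, random_state=2)

# Hyperparameters are set before training; the grid below is one common choice.
param_grid = {
    "n_estimators": [100, 300],
    "max_depth": [None, 5, 10],
    "min_samples_leaf": [1, 5],
}

# Every combination in the grid is scored with 5-fold cross-validation.
search = GridSearchCV(
    RandomForestClassifier(random_state=2),
    param_grid,
    cv=5,
    scoring="accuracy",
)
search.fit(X, y)

print("Best hyperparameters:", search.best_params_)
print("Best cross-validated accuracy:", search.best_score_)
```

Scoring each candidate setting with cross-validation guards against picking hyperparameters that merely fit one particular train/test split.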

Applications of Principal Components Analysis in Finance

Principal components analysis (PCA) is a useful tool that can help practitioners reduce the dimensionality of their data while retaining most of the important variation. In today’s blog, we’ll examine the use of principal components analysis in finance using an empirical example (a brief code sketch follows the list). We’ll look more closely at:
  • What PCA is.
  • How PCA works.
  • How to use the GAUSS Machine Learning library to perform PCA.
  • How to interpret PCA results.
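For a sense of what a PCA workflow looks like in code, here is a minimal Python/scikit-learn sketch on a simulated panel of asset returns. The blog’s empirical example uses real financial data and the GAUSS Machine Learning library instead.

```python
import numpy as np
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import PCA

# Simulated returns: 250 days for 8 hypothetical assets.
rng = np.random.default_rng(0)
returns = rng.normal(size=(250, 8))

# Standardize so each series contributes equally to the components.
Z = StandardScaler().fit_transform(returns)

# Fit PCA and compute the principal component scores.
pca = PCA()
scores = pca.fit_transform(Z)

# Interpretation: explained-variance ratios show how much of the total
# variation each component captures; loadings link components to assets.
print("Explained variance ratio:", np.round(pca.explained_variance_ratio_, 3))
print("Loadings for the first component:", np.round(pca.components_[0], 3))
```

In financial data, the first few components often summarize broad market movements, so a handful of components can stand in for a much larger set of series.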

Predicting Recessions with Machine Learning Techniques

Forecasts have become a valuable commodity in today’s data-driven world. Unfortunately, not all forecasting models are of equal caliber, and incorrect predictions can lead to costly decisions. Today we will compare the performance of several models for predicting recessions. In particular, we’ll look at how a traditional baseline econometric model compares to machine learning models (a comparison template in code follows the list). Our models will include:
  • A baseline probit model.
  • K-nearest neighbors.
  • Decision forests.
  • Ridge classification.
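A simple way to organize such a comparison is to fit each candidate model on the same training data and score it on the same test set. The sketch below does this in Python/scikit-learn on simulated data, with a logistic regression standing in for the probit baseline; the blog’s actual comparison uses real recession data and is carried out in GAUSS.

```python
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression, RidgeClassifier
from sklearn.neighbors import KNeighborsClassifier
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score

# Simulated, imbalanced data: recessions are the rare class.
X, y = make_classification(n_samples=800, n_features=12, weights=[0.85],
                           random_state=3)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=3, stratify=y
)

# Candidate models, mirroring the list above.
models = {
    "Baseline (logistic stand-in for probit)": LogisticRegression(max_iter=1000),
    "K-nearest neighbors": KNeighborsClassifier(n_neighbors=5),
    "Decision forest": RandomForestClassifier(random_state=3),
    "Ridge classification": RidgeClassifier(),
}

# Fit each model on the same training data and score it on the same test set.
for name, model in models.items():
    model.fit(X_train, y_train)
    acc = accuracy_score(y_test, model.predict(X_test))
    print(f"{name}: test accuracy = {acc:.3f}")
```

Because recessions are the rare class, accuracy alone can be misleading; in practice you would also compare measures such as precision, recall, or ROC AUC.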
