The Math Behind the Machine

Posted on Tuesday, September 28th, 2021

Financial forecasting requires a great deal of human effort and is costly. Machine learning can more efficiently explore vast amounts of data to make predictions.

Refining the mathematical models behind machine learning helps businesses predict future growth.

Fortune tellers are not the only people who need to predict the future. Government policymakers and corporate managers can plan more successfully if they can predict the percentage change over time (the growth rate) of factors such as revenues. But they do not have crystal balls. That is where machine learning comes in.

Using mathematical algorithms, computers can learn from existing data and use what they learn to generate predictions that inform strategy. Mathematics professor Dr. Herb Kunze, his Ph.D. student Bryson Boreland, and colleagues have incorporated new considerations into existing machine-learning algorithms with applications in growth forecasting, improving their performance. The work spans the domains of machine learning and multi-criteria decision making, or optimization.

Training machines

During the training stage of machine learning, researchers run algorithms on known, “labelled” data to produce a mathematical model, a process called “model fitting,” which Turing Award winner Judea Pearl humorously referred to as “glorified curve fitting.” The typical training paradigm splits a known dataset into two parts: the “training set,” used for model fitting, and the “test set,” used to assess the accuracy of the model. The goal is to use the resulting model to make predictions where the data are not yet known.
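
To make the paradigm concrete, here is a minimal sketch in Python of a train/test split followed by a simple least-squares fit. The data are synthetic and the linear model is a stand-in; nothing below reproduces the researchers' actual models or experiments.

```python
import numpy as np

# Synthetic "labelled" data: a noisy linear trend.
rng = np.random.default_rng(0)
x = np.linspace(0, 10, 100)
y = 2.0 * x + 1.0 + rng.normal(scale=1.0, size=x.size)

# Split the known dataset: 80% for model fitting, 20% held out as the test set.
split = int(0.8 * x.size)
x_train, y_train = x[:split], y[:split]
x_test, y_test = x[split:], y[split:]

# "Model fitting" (glorified curve fitting): least-squares fit of a line.
coeffs = np.polyfit(x_train, y_train, deg=1)

# Assess accuracy on data the model never saw during fitting.
y_pred = np.polyval(coeffs, x_test)
test_mse = np.mean((y_test - y_pred) ** 2)
print(f"fitted coefficients: {coeffs}, test mean squared error: {test_mse:.3f}")
```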

More criteria improve accuracy

The researchers incorporated several criteria into a variety of models: the data-fitting error, which measures accuracy; entropy, the amount of information carried within the parameters of the model; and sparsity, a measure of the complexity of the model and its solution. These criteria compete: a decision that best satisfies one typically worsens the others. The researchers validated the improvements in model quality through computational experiments on two different applications, handwritten digit recognition and financial data forecasting, using an assortment of models. In one case, they considered real data for the United States Gross Domestic Product (GDP), training the model to forecast each value from the previous data points. Their model effectively balanced the three competing criteria, and its accuracy improved when a small amount of sparsity or entropy was added.
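
As a rough illustration of how such competing criteria can be folded into a single objective, the Python sketch below combines a data-fitting error, a sparsity penalty, and an entropy term. The penalty forms and the weights lam_sparse and lam_entropy are hypothetical choices for illustration; the paper's exact formulation differs.

```python
import numpy as np

def multi_criteria_loss(w, X, y, lam_sparse=0.01, lam_entropy=0.01):
    """Illustrative objective combining three competing criteria for weights w."""
    fit_error = np.mean((X @ w - y) ** 2)        # data-fitting error (accuracy)
    sparsity = np.sum(np.abs(w))                 # L1 norm favours few nonzero weights
    p = np.abs(w) / (np.sum(np.abs(w)) + 1e-12)  # normalize |w| into a distribution
    entropy = -np.sum(p * np.log(p + 1e-12))     # information carried by the parameters
    # Improving one term typically worsens another; the weights set the balance.
    return fit_error + lam_sparse * sparsity + lam_entropy * entropy

# Example: evaluate the objective on synthetic regression data.
rng = np.random.default_rng(1)
X = rng.normal(size=(50, 5))
w = np.array([1.5, 0.0, 0.0, -2.0, 0.0])  # a sparse weight vector
y = X @ w + rng.normal(scale=0.1, size=50)
print(multi_criteria_loss(w, X, y))
```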

“This competing multi-criteria approach helps improve machine learning and forecasting, which can provide a ‘competitive edge,’ at least in theory,” says Kunze. “There are many other interesting applications and underlying neural network models to consider. Given the nature of the criteria, the work is also very interesting from a strictly mathematical viewpoint. Bryson Boreland’s doctoral work involves a deep study of the matter.”


Herb Kunze is a Professor of Mathematics at the University of Guelph.

This work was supported by an NSERC Discovery Grant.

Boreland B, Kunze H, La Torre D, Liuzzi D. A Generalized Multiple Criteria Data-Fitting Model With Sparsity and Entropy With Application to Growth Forecasting. IEEE Transactions on Engineering Management. 2021 Jun 10. doi: 10.1109/TEM.2021.3078831