Author

Lelaina Rose

Date of Award

2025

Document Type

Thesis

Degree Name

Bachelors

Department

Natural Sciences

First Advisor

Hulden, Mans

Area of Concentration

Computer Science with Statistics

Abstract

In this thesis I examine a specific ensembling technique, which I call “additive ensembling,” and how it performs when applied to linear classifiers (in particular, the perceptron) and to feedforward neural networks. The technique differs from previous ensembling methods in that it does not require storing the individual models’ weight vectors. Instead, it iteratively sums the weight vectors of multiple separately trained models, producing one final summed model with just one weight vector per class. It is implemented as an extension of the model classes in the scikit-learn machine learning library. The algorithm is straightforward to implement, requiring only a few lines of code, and is much more efficient than traditional ensembling. Performance was evaluated on eight standard datasets, and the results show that the technique delivers superior performance with all of the linear classifiers tested, such as the perceptron and logistic regression. For feedforward neural networks, however, this type of ensembling fails completely and yields essentially random classification. I conjecture that this is because linear classifiers have convex parameter landscapes (loss function surfaces): the individual models find points that all lie close to the minimum of this landscape, so the model average is drawn toward the global minimum. The same is not true for feedforward neural networks. Because their objective functions are not convex and can have multiple minima, the summation leads to a random point in parameter space rather than to an actual minimum.
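
To make the summation procedure concrete, the following is a minimal sketch of additive ensembling for scikit-learn perceptrons. The dataset (digits), the number of ensemble members, and the use of different random seeds are assumptions chosen for demonstration; this illustrates the idea described in the abstract and is not the thesis implementation itself.

import numpy as np
from sklearn.datasets import load_digits
from sklearn.linear_model import Perceptron
from sklearn.model_selection import train_test_split

# Demonstration data (assumed for this sketch, not one of the thesis datasets).
X, y = load_digits(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

n_models = 10
summed_coef = None
summed_intercept = None

# Train each ensemble member separately and add its weights to a running sum;
# only one weight vector per class is ever stored.
for seed in range(n_models):
    clf = Perceptron(random_state=seed, shuffle=True)
    clf.fit(X_train, y_train)
    if summed_coef is None:
        summed_coef = clf.coef_.copy()
        summed_intercept = clf.intercept_.copy()
    else:
        summed_coef += clf.coef_
        summed_intercept += clf.intercept_

# Reuse the last fitted model as a container for the summed weight vectors.
ensemble = clf
ensemble.coef_ = summed_coef
ensemble.intercept_ = summed_intercept

baseline = Perceptron(random_state=0).fit(X_train, y_train)
print("single perceptron accuracy:", baseline.score(X_test, y_test))
print("summed ensemble accuracy:  ", ensemble.score(X_test, y_test))

Because the perceptron predicts by taking the argmax of per-class scores, the summed weights behave like (a scaled version of) the average of the individual models, which is why no division by the number of models is needed in this sketch.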
