
A STUDY OF THE PREDICTION OF THE THREE MODELS LOGISTIC REGRESSION, GAUSSIAN NAÏVE BAYES, AND MULTI-LAYER PERCEPTRON CLASSIFIER IN MACHINE LEARNING

Authors

Namkil Kang

Keywords: machine learning, model, Logistic Regression, Gaussian NB, Multi-Layer Perceptron Classifier, accuracy

Abstract

The ultimate goal of this paper is to analyze the accuracy of three machine learning models. More specifically, we trained Logistic Regression, Gaussian Naïve Bayes, and Multi-Layer Perceptron Classifier models to predict whether each person had a cold or not. The research was carried out in Python. A point to note is that we trained the Logistic Regression model on this prediction task. With grid search, the best parameter was 0.1 and the accuracy of the Logistic Regression model was 96%; on the test data, its accuracy was 100%. This in turn suggests that the improvement resulted from training the model. With random search, the best parameter was 6 and the accuracy of the Logistic Regression model was 93.33%; more importantly, on the test data with parameter 6, the best score was 100%, again because the model had been trained. A further point to note is that we trained the Gaussian Naïve Bayes model on our data. Most importantly, with both grid search and random search, its accuracy was 100%, indicating that this model worked well for the 100 sets of data. A major point of this paper is that we also trained the Multi-Layer Perceptron Classifier to predict whether each person had a cold or not. With grid search, its best parameter was 50 and its best score was 96%, whereas on the test data its accuracy was 100%. It is clear from our findings that all three models worked well for our data, but the Gaussian Naïve Bayes model worked best.
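The workflow described in the abstract can be sketched with scikit-learn. This is a minimal illustration, not the paper's actual code: the synthetic "cold" dataset, the parameter grids, and the train/test split below are all assumptions made for the sketch, so the scores it prints will not match the paper's reported figures.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split, GridSearchCV
from sklearn.linear_model import LogisticRegression
from sklearn.naive_bayes import GaussianNB
from sklearn.neural_network import MLPClassifier

# Synthetic stand-in for the 100-person cold survey used in the paper.
X, y = make_classification(n_samples=100, n_features=5, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# One cross-validated grid search per model; the grids are illustrative.
searches = {
    "LogisticRegression": GridSearchCV(
        LogisticRegression(max_iter=1000),
        {"C": [0.01, 0.1, 1, 10]}, cv=5),
    "GaussianNB": GridSearchCV(
        GaussianNB(),
        {"var_smoothing": [1e-9, 1e-7, 1e-5]}, cv=5),
    "MLPClassifier": GridSearchCV(
        MLPClassifier(max_iter=2000, random_state=0),
        {"hidden_layer_sizes": [(10,), (50,)]}, cv=5),
}

scores = {}
for name, search in searches.items():
    search.fit(X_train, y_train)                 # tune on training data
    scores[name] = search.score(X_test, y_test)  # held-out accuracy
    print(name, search.best_params_, round(scores[name], 3))
```

A random-search variant simply swaps `GridSearchCV` for `RandomizedSearchCV` with the same parameter spaces.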


Published

2024-02-21

Issue

Vol. 43 No. 01 (2024)