Analysis of the impact of parameters in TextGCN
DOI: https://doi.org/10.14210/cotb.v12.p014-019

Abstract
Deep learning models use many parameters to work properly. As these models grow more complex, their authors cannot explore the variation of every parameter in the papers that introduce them. This work therefore analyzes the impact of four parameters (Early Stopping, Learning Rate, Dropout, and Hidden 1) in the TextGCN model. The evaluation used four datasets considered in the original TextGCN publication and, as a side effect, obtained small improvements over the reported results on three of them. The most relevant conclusion is that these parameters influence convergence and accuracy, although no single one of them provides strong leverage for improving on the state-of-the-art results reported for the model.
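
As a minimal sketch of the kind of sweep described above, the Python snippet below enumerates combinations of the four parameters. The flag names follow the reference TextGCN implementation (learning_rate, hidden1, dropout, early_stopping), but the value ranges and the train_textgcn helper are illustrative assumptions, not the ones used in this paper.

# Hypothetical sketch of a sweep over the four parameters studied here.
# Value ranges are illustrative assumptions, not the grids from the paper.
from itertools import product

learning_rates = [0.002, 0.02, 0.2]   # step size of the optimizer
hidden1_sizes = [100, 200, 400]       # units in the first GCN layer ("Hidden 1")
dropouts = [0.25, 0.5, 0.75]          # dropout probability on layer inputs
patiences = [10, 50, 100]             # early-stopping patience (epochs)

# Enumerate every combination; a parameter's individual effect can then be
# isolated by comparing runs that differ in that parameter alone.
for lr, h1, dp, es in product(learning_rates, hidden1_sizes, dropouts, patiences):
    config = {"learning_rate": lr, "hidden1": h1,
              "dropout": dp, "early_stopping": es}
    # train_textgcn(config)  # hypothetical helper: run one training job
    print(config)

In practice such a sweep would vary one parameter at a time around the defaults rather than run the full grid, which keeps the number of training jobs linear in the number of values tested.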