Show simple item record

dc.contributor.author  Egele R
dc.contributor.author  Mohr F
dc.contributor.author  Viering T
dc.contributor.author  Balaprakash P.
dc.date.accessioned  2024-10-07T21:39:20Z
dc.date.available  2024-10-07T21:39:20Z
dc.date.issued  2024
dc.identifier.issn  0925-2312
dc.identifier.other  https://www.scopus.com/inward/record.uri?eid=2-s2.0-85195362246&doi=10.1016%2fj.neucom.2024.127964&partnerID=40&md5=f737ef41f4eb857671228f758114f6bf
dc.identifier.uri  http://hdl.handle.net/10818/61892
dc.description.abstract  To reach high performance with deep learning, hyperparameter optimization (HPO) is essential. This process is usually time-consuming due to costly evaluations of neural networks. Early discarding techniques limit the resources granted to unpromising candidates by observing empirical learning curves and canceling neural network training as soon as a candidate's lack of competitiveness becomes evident. Despite two decades of research, little is understood about the trade-off between the aggressiveness of discarding and the loss of predictive performance. Our paper studies this trade-off for several commonly used discarding techniques, such as successive halving and learning curve extrapolation. Our surprising finding is that these commonly used techniques offer minimal to no added value compared to the simple strategy of discarding after a constant number of epochs of training. The chosen number of epochs depends mostly on the available compute budget. We call this approach i-Epoch (i being the constant number of epochs with which neural networks are trained) and suggest assessing the quality of early discarding techniques by comparing how their Pareto fronts (in consumed training epochs and predictive performance) complement the Pareto front of i-Epoch. © 2024 The Author(s)  en
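
The i-Epoch strategy named in the abstract amounts to a fixed, candidate-independent training budget: train every hyperparameter candidate for exactly i epochs, then keep only the best performer. The Python sketch below illustrates this idea; it is a minimal illustration under stated assumptions, not the authors' implementation, and the callables train_one_epoch and validation_score are hypothetical placeholders.

def i_epoch_search(candidates, train_one_epoch, validation_score, i=1):
    """Sketch of i-Epoch early discarding: a constant epoch budget per candidate.

    candidates       -- iterable of (hyperparameters, model) pairs
    train_one_epoch  -- hypothetical callable(model, hyperparameters) -> model
    validation_score -- hypothetical callable(model) -> float, higher is better
    i                -- the constant epoch budget (i=1 matches the paper's title)
    """
    scored = []
    for hp, model in candidates:
        for _ in range(i):  # fixed budget, identical for every candidate
            model = train_one_epoch(model, hp)
        scored.append((validation_score(model), hp, model))
    # All other candidates are discarded after exactly i epochs; no
    # learning-curve extrapolation or successive halving is involved.
    return max(scored, key=lambda s: s[0])

Unlike successive halving, which repeatedly reallocates budget among surviving candidates, this sketch spends the same number of epochs on every candidate, so the only tuning knob is i itself, chosen from the available compute budget.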
dc.format  application/pdf  es_CO
dc.language.iso  eng  es_CO
dc.publisher  Neurocomputing  es_CO
dc.relation.ispartofseries  Neurocomputing Vol. 597, art. no. 127964
dc.rights  Attribution-NonCommercial-NoDerivatives 4.0 International
dc.rights.uri  http://creativecommons.org/licenses/by-nc-nd/4.0/
dc.source  Universidad de La Sabana  es_CO
dc.source  Intellectum Repositorio Universidad de La Sabana  es_CO
dc.subject.other  Deep neural network  en
dc.subject.other  Hyperparameter optimization  en
dc.subject.other  Learning curve  en
dc.subject.other  Multi-fidelity optimization  en
dc.title  The unreasonable effectiveness of early discarding after one epoch in neural network hyperparameter optimization  en
dc.type  journal article  es_CO
dc.type.hasVersion  publishedVersion  es_CO
dc.rights.accessRights  openAccess  es_CO
dc.identifier.doi  10.1016/j.neucom.2024.127964


Files in this item


There are no files associated with this item.

This item appears in the following Collection(s)


Except where otherwise noted, this item's license is described as Attribution-NonCommercial-NoDerivatives 4.0 International.