TY - JOUR
AU - Egele, R.
AU - Mohr, F.
AU - Viering, T.
AU - Balaprakash, P.
PY - 2024
SN - 0925-2312
UR - http://hdl.handle.net/10818/61892
AB - To reach high performance with deep learning, hyperparameter optimization (HPO) is essential. This process is usually time-consuming due to costly evaluations of neural networks. Early discarding techniques limit the resources granted to unpromising...
LA - eng
JO - Neurocomputing
KW - Deep neural network
KW - Hyperparameter optimization
KW - Learning curve
KW - Multi-fidelity optimization
TI - The unreasonable effectiveness of early discarding after one epoch in neural network hyperparameter optimization
DO - 10.1016/j.neucom.2024.127964
ER - 