We propose EVOlutionary Selector (EVOS), an efficient training paradigm for accelerating Implicit Neural Representation (INR). Unlike conventional INR training, which feeds all samples through the neural network at every iteration, our approach restricts training to strategically selected points, reducing computational overhead by eliminating redundant forward passes. Specifically, we treat each sample as an individual in an evolutionary process in which only the fittest survive and merit inclusion in training, adapting to the dynamics of the neural network. While this is conceptually similar to Evolutionary Algorithms, their distinct objective (iterative solution optimization rather than selection for acceleration) requires the evolutionary mechanisms to be fundamentally redefined for our setting. Accordingly, we design three components that comprise EVOS: sparse fitness evaluation, frequency-guided crossover, and augmented unbiased mutation. These components respectively guide sample selection at reduced computational cost, enhance performance through frequency-domain balance, and mitigate the selection bias introduced by cached evaluation. Extensive experiments demonstrate that our method reduces training time by approximately 48%-66% while ensuring superior convergence at no additional cost, establishing state-of-the-art acceleration among recent sampling-based strategies.
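The survive-and-mutate selection described above can be sketched as follows. This is our own minimal illustration, not the paper's implementation: `evos_select`, `frac`, and `mut_rate` are hypothetical names, the fitness is simply a cached per-sample loss (standing in for the sparse fitness evaluation), and the frequency-guided crossover is omitted for brevity.

```python
import numpy as np

def evos_select(per_sample_loss, frac=0.4, mut_rate=0.1, rng=None):
    """Hedged sketch of EVOS-style sample selection (illustrative names).

    per_sample_loss: cached fitness per coordinate sample, e.g. the L2
        error from a sparse evaluation pass a few iterations ago.
    frac: fraction of samples that "survive" into the current iteration.
    mut_rate: fraction of the surviving set swapped for uniformly random
        samples (augmented unbiased mutation), offsetting the bias of a
        stale fitness cache.
    """
    rng = np.random.default_rng() if rng is None else rng
    n = len(per_sample_loss)
    k = max(1, int(frac * n))
    # Fitness-based survival: the highest-loss samples are the "fittest",
    # since they are the ones the network still reconstructs poorly.
    survivors = np.argsort(per_sample_loss)[-k:].copy()
    # Mutation: replace a few survivors with random coordinates so that
    # samples absent from the cached evaluation can still be trained on.
    m = int(mut_rate * k)
    if m > 0:
        survivors[rng.choice(k, size=m, replace=False)] = rng.integers(
            0, n, size=m
        )
    return survivors
```

In a training loop, only the returned indices would be forwarded through the INR each iteration, which is where the saving in redundant forward passes comes from.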
Sample Weighting ALSO Benefits “Overfitting”. An intuitive assumption is that sparsifying the training samples would trade per-iteration reconstruction quality for lower computational cost, since the data are only partially utilized. However, the results of EVOS (w/o CFS) in our experiment challenge this intuition, demonstrating that our strategy achieves better fitting quality than full-data training under identical iteration counts. This phenomenon can be understood through the lens of sample weighting, a technique that improves model generalization by adjusting how frequently samples are observed during training. Indeed, EVOS can be viewed as a specialized form of sample weighting that reweights signal coordinates during signal fitting, implicitly regularizing the loss function through selective sampling. Notably, this finding extends the benefits of sample weighting beyond conventional model generalization to signal-fitting tasks (inherently an overfitting scenario with no test-set validation), revealing an intriguing tension that merits further investigation.
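The reweighting view admits a small numerical check. The snippet below is our own illustration, not from the paper: drawing coordinates with probability `p_i` and averaging their losses matches, in expectation, the full loss reweighted by `p_i`, which is the sense in which selective sampling implicitly reweights the objective.

```python
import numpy as np

# Illustration (hypothetical setup): stand-in per-coordinate losses,
# with selection probability proportional to fitness (here, the loss).
rng = np.random.default_rng(0)
losses = rng.random(1000)
p = losses / losses.sum()

# The implicitly reweighted objective: each coordinate's loss scaled
# by its selection probability.
weighted_full = float((p * losses).sum())

# Monte-Carlo estimate of the same quantity via selective sampling:
# draw coordinates according to p and average their losses.
draws = rng.choice(len(losses), size=200_000, p=p)
mc = float(losses[draws].mean())
```

The two quantities agree up to Monte-Carlo noise, so training on the selected subset optimizes (in expectation) a reweighted version of the full loss rather than simply discarding information.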
@article{evos-inr,
title={EVOS: Efficient Implicit Neural Training via EVOlutionary Selector},
author={Zhang, Weixiang and Xie, Shuzhao and Ren, Chengwei and Xie, Siyi and Tang, Chen and Ge, Shijia and Wang, Mingzi and Wang, Zhi},
journal={arXiv preprint arXiv:2412.10153},
year={2024}
}