EVOS: Efficient Implicit Neural Training via EVOlutionary Selector

Weixiang Zhang1, Shuzhao Xie1, Chengwei Ren1, Siyi Xie2, Chen Tang3, Shijia Ge1, Mingzi Wang1, Zhi Wang1*
1Tsinghua University; 2Xi'an Jiaotong University; 3The Chinese University of Hong Kong
CVPR 2025

*Indicates Corresponding Author

Overview of the EVOS framework. The proposed method optimizes an MLP for implicit signal representation via an evolutionary selector. The process comprises three key components: (1) (Sparse) Fitness Evaluation, which efficiently guides coordinate selection; (2) (Frequency-Guided) Crossover, which improves performance by balancing frequency-domain preferences; and (3) (Augmented Unbiased) Mutation, which mitigates selection bias in each iteration. The coordinates selected by this evolutionary process are then fed into the network, enabling sparsified forward passes and reduced computational cost.

📌 Abstract

We propose EVOlutionary Selector (EVOS), an efficient training paradigm for accelerating Implicit Neural Representation (INR). Unlike conventional INR training, which feeds all samples through the neural network in each iteration, our approach restricts training to strategically selected points, reducing computational overhead by eliminating redundant forward passes. Specifically, we treat each sample as an individual in an evolutionary process, where only the fittest survive and merit inclusion in training, adaptively evolving with the network dynamics. While this is conceptually similar to Evolutionary Algorithms, their distinct objectives (selection for acceleration vs. iterative solution optimization) require a fundamental redefinition of evolutionary mechanisms for our context. In response, we design sparse fitness evaluation, frequency-guided crossover, and augmented unbiased mutation, which together comprise EVOS. These components respectively guide sample selection with reduced computational cost, enhance performance through frequency-domain balance, and mitigate the selection bias introduced by cached evaluation. Extensive experiments demonstrate that our method achieves an approximately 48%-66% reduction in training time while ensuring superior convergence without additional cost, establishing state-of-the-art acceleration among recent sampling-based strategies.
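To make the selection idea concrete, here is a minimal, illustrative sketch (not the official EVOS code) of fitness-based coordinate selection: per-coordinate reconstruction loss serves as fitness, the highest-loss coordinates survive into the next training iteration, and a small random "mutation" fraction is mixed in to counter selection bias. All function and parameter names here are hypothetical.

```python
import numpy as np

def select_coordinates(per_sample_loss, keep_ratio=0.5, mutation_ratio=0.1, rng=None):
    """Toy fitness-based selector (illustrative; names are assumptions).

    Keeps the highest-loss ("fittest") coordinates and mixes in a few
    random ones, loosely mirroring selection plus unbiased mutation.
    """
    rng = rng or np.random.default_rng(0)
    n = per_sample_loss.shape[0]
    n_keep = int(n * keep_ratio)          # coordinates fed forward this iteration
    n_mut = int(n_keep * mutation_ratio)  # random slots reserved for mutation

    # Selection: the highest-loss samples survive.
    fittest = np.argsort(per_sample_loss)[::-1][: n_keep - n_mut]

    # Mutation: random samples outside the fittest set reduce selection bias.
    rest = np.setdiff1d(np.arange(n), fittest)
    mutated = rng.choice(rest, size=n_mut, replace=False)
    return np.concatenate([fittest, mutated])

# Example: 8 coordinates, keep half of them.
loss = np.array([0.9, 0.1, 0.5, 0.8, 0.05, 0.3, 0.7, 0.2])
idx = select_coordinates(loss, keep_ratio=0.5)
print(sorted(idx.tolist()))  # the four highest-loss coordinates: [0, 2, 3, 6]
```

Only the selected coordinates are forwarded through the MLP, which is where the per-iteration savings come from; the paper's sparse fitness evaluation avoids recomputing the loss on all coordinates every step, a refinement this sketch omits.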

💡 Insights

Sample Weighting ALSO Benefits "Overfitting". Intuition suggests that sparsifying training samples would compromise per-iteration reconstruction quality, due to incomplete data utilization, in exchange for reduced computational cost. However, the results of EVOS (w/o CFS) in our experiments challenge this intuition, demonstrating that our strategy achieves superior fitting quality compared to full-data training under identical iteration counts. This phenomenon can be understood through the lens of sample weighting, a technique that improves model generalization by adjusting how often each sample is observed during training. Indeed, EVOS can be viewed as a specialized form of sample weighting that reweights signal coordinates during signal fitting, implicitly regularizing the loss function through selective sampling. Notably, this finding extends the benefits of sample weighting beyond traditional model generalization to signal fitting tasks (inherently an overfitting scenario without test-set validation), revealing an intriguing contradiction that merits further investigation.

BibTeX


@article{evos-inr,
  title={EVOS: Efficient Implicit Neural Training via EVOlutionary Selector},
  author={Zhang, Weixiang and Xie, Shuzhao and Ren, Chengwei and Xie, Siyi and Tang, Chen and Ge, Shijia and Wang, Mingzi and Wang, Zhi},
  journal={arXiv preprint arXiv:2412.10153},
  year={2024}
}