Residual Networks (ResNets) are pivotal in machine learning. The connection between ResNets and ordinary differential equations (ODEs) has inspired enhancements of ResNets based on sophisticated numerical methods for ODE systems. Recent self-adaptive schemes, which adjust time step sizes based on either the feature maps or the parameters of residual blocks, have shown promising improvements in ResNet performance over fixed time step methods. However, these self-adaptive time step constructions lack theoretical support and can limit performance gains, because the time steps should, in theory, depend on both the feature maps and the parameters of the residual blocks. In this study, we conduct a rigorous theoretical analysis of the residual functions associated with fixed and self-adaptive time step methods, demonstrating the advantages of the self-adaptive approach and guiding its rational design. Building on this analysis, we introduce a novel self-adaptive ResNet, AdaTS-ResNet, whose time step adjustments effectively incorporate both feature maps and parameters. Experimental results on the ImageNet and CIFAR datasets show that AdaTS-ResNet surpasses TSCLSTM-ResNet (Yang et al., 2020), the most recent method built on self-adaptive ODE schemes, in both prediction accuracy and computational efficiency. Our findings highlight the potential of improving ResNet architectures through adaptive techniques from dynamical systems and offer insights for future enhancements of deep learning models.
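
To make the ODE viewpoint concrete, the display below sketches the standard interpretation of a residual block as a forward Euler step and the corresponding self-adaptive variant. The exact functional form of the adaptive time step used by AdaTS-ResNet is a contribution of the paper and is not specified in this abstract; the dependence of $\Delta t_k$ on both the feature map $x_k$ and the block parameters $\theta_k$ is therefore written only schematically here.

\[
x_{k+1} = x_k + f(x_k, \theta_k)
\;\;\Longleftrightarrow\;\;
\frac{dx}{dt} = f\bigl(x(t), \theta(t)\bigr)
\quad \text{(forward Euler with } \Delta t = 1\text{)},
\]
\[
x_{k+1} = x_k + \Delta t_k(x_k, \theta_k)\, f(x_k, \theta_k)
\quad \text{(self-adaptive time step, schematic form)}.
\]

In the fixed-step setting, $\Delta t_k$ is a constant; in the self-adaptive setting studied here, it is allowed to vary across blocks as a function of both the feature map and the block parameters.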