Neural networks have emerged as powerful and versatile tools in the field of deep learning. As the complexity of the task increases, so do the size and architectural complexity of the resulting networks, making compression techniques a focus of current research. Parameter truncation can provide a significant reduction in memory footprint and computational cost. Originating from a model order reduction framework, the Discrete Empirical Interpolation Method (DEIM) is applied to the gradient descent training of neural networks in order to identify important parameters. The approach is compared to established truncation methods on several state-of-the-art neural networks. Beyond the L2 and cross-entropy losses, accuracy and compression rate are reported.
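For orientation, the following is a minimal sketch of the classical DEIM index selection (Chaturantabut and Sorensen, 2010), not the paper's implementation; how the snapshot matrix is assembled from training data and the chosen rank are hypothetical assumptions here.

```python
import numpy as np

def deim_indices(U):
    """Greedy DEIM index selection.

    U : (n, m) array whose columns form an orthonormal basis, e.g. the
        leading left singular vectors of a snapshot matrix of parameters
        (or gradients) collected during training.
    Returns the m row indices that DEIM marks as most important.
    """
    n, m = U.shape
    # First index: location of the largest entry of the first basis vector.
    p = [int(np.argmax(np.abs(U[:, 0])))]
    for j in range(1, m):
        # Interpolate u_j on the indices selected so far ...
        c = np.linalg.solve(U[p, :j], U[p, j])
        # ... and pick the row where the interpolation residual is largest.
        r = U[:, j] - U[:, :j] @ c
        p.append(int(np.argmax(np.abs(r))))
    return np.array(p)

# Hypothetical usage: snapshots of a flattened weight vector over training steps.
rng = np.random.default_rng(0)
snapshots = rng.standard_normal((1000, 50))   # 1000 parameters, 50 snapshots
U, _, _ = np.linalg.svd(snapshots, full_matrices=False)
important = deim_indices(U[:, :10])           # indices of 10 "important" parameters
```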