We propose to improve the mathematical interpretability of convolutional neural
networks (CNNs) for image classification. To this end, we replace the first
layers of existing models such as AlexNet or ResNet with an operator containing
the dual-tree wavelet packet transform, i.e., a redundant decomposition using
complex and oriented waveforms. Our experiments show that these modified
networks behave very similarly to the original models once trained. The goal is
then to study this operator from a theoretical point of view and to identify
potential optimizations. We want to analyze its main properties such as
directional selectivity, stability with respect to small shifts and rotations,
thus retaining discriminant information while decreasing intra-class
variability. This work is a step toward a more complete description of CNNs
using well-defined mathematical operators, characterized by a small number of
arbitrary parameters, making them easier to interpret.
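As a simplified illustration (not the method described above, which uses the dual-tree wavelet packet transform), the idea of replacing learned first-layer filters with a fixed, mathematically defined decomposition can be sketched with a plain 2D Haar wavelet transform whose subbands are stacked as output channels:

```python
import numpy as np

def haar_dwt2(x):
    """One level of the 2D Haar wavelet transform.

    x: (H, W) array with even H and W.
    Returns the four subbands (LL, LH, HL, HH), each of shape (H//2, W//2).
    """
    # Pairwise averages/differences along rows (low-pass / high-pass).
    a = (x[0::2, :] + x[1::2, :]) / 2.0
    d = (x[0::2, :] - x[1::2, :]) / 2.0
    # Same filtering along columns of each intermediate result.
    ll = (a[:, 0::2] + a[:, 1::2]) / 2.0
    lh = (a[:, 0::2] - a[:, 1::2]) / 2.0
    hl = (d[:, 0::2] + d[:, 1::2]) / 2.0
    hh = (d[:, 0::2] - d[:, 1::2]) / 2.0
    return ll, lh, hl, hh

def wavelet_first_layer(img):
    """Fixed 'first layer': stack the subbands as feature channels.

    img: (H, W) grayscale image.  Returns a (4, H//2, W//2) tensor,
    analogous to the multi-channel output of a strided conv layer.
    """
    return np.stack(haar_dwt2(img), axis=0)
```

Unlike learned filters, this operator has no trainable parameters; for a constant image, the three high-pass channels are exactly zero, while the LL channel reproduces the (downsampled) input. The actual dual-tree transform additionally uses complex, oriented filters to gain directional selectivity and near shift invariance, which the Haar sketch lacks.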