In this talk, a new family of frames in $\mathbb{R}^n$, called α-rectifying frames, is introduced; these frames allow exact reconstruction after (biased) rectification of their analysis coefficients. In other words, coupling the analysis operator of such a frame with the (biased) ReLU function, together written as the non-linear operator
$$T^*_{\mathrm{ReLU},\alpha}\colon \mathbb{R}^n \to \mathbb{R}^m, \qquad x \mapsto \big(\max(0, \langle x, \varphi_i \rangle - \alpha_i)\big)_{i=1}^{m},$$
is one-to-one. In the context of deep learning, operators of this type appear as so-called ReLU-layers in neural networks, where the ReLU function is used as the non-linear activation step and the $\varphi_i$ and $\alpha_i$ arise as "learned" parameters of an optimization procedure. The motivation of this work is to better understand these operators; in particular, the question of injectivity is treated in order to determine when perfect reconstruction of the input data is possible. For this, a frame-theoretic approach fits perfectly, yet it has appeared only marginally in the literature so far.
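To make the operator concrete, here is a minimal numerical sketch of the biased-ReLU analysis map defined above; the matrix `Phi` (with the frame elements $\varphi_i$ as rows), the bias vector `alpha`, and the function name `relu_analysis` are illustrative choices, not notation from the talk.

```python
import numpy as np

def relu_analysis(Phi: np.ndarray, alpha: np.ndarray, x: np.ndarray) -> np.ndarray:
    """Apply x |-> (max(0, <x, phi_i> - alpha_i))_{i=1}^m."""
    return np.maximum(0.0, Phi @ x - alpha)

# Example: a redundant frame of m = 5 random vectors in R^2 with zero bias.
rng = np.random.default_rng(0)
Phi = rng.standard_normal((5, 2))
alpha = np.zeros(5)
x = np.array([1.0, -0.5])
print(relu_analysis(Phi, alpha, x))
```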
After introducing the new frame family and some of its basic properties, the injectivity of $T^*_{\mathrm{ReLU},\alpha}$ and the corresponding reconstruction are discussed. Then, in order to remove the dependence on the input $x$, the second part exploits the geometry of the frame to obtain estimates for the biases α for which a given frame is α-rectifying on the unit sphere. This is done by considering the polytopes that arise as convex hulls of the frame elements. The approach also yields well-defined regions on the unit sphere that can be used for region-specific reconstruction; a toy illustration of reconstruction is sketched below. Finally, since this is ongoing work, an outlook on further ideas is given.
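As a toy illustration of why injectivity enables reconstruction, the sketch below recovers $x$ from its rectified coefficients by a least-squares solve over the "active" frame elements (those with strictly positive output). This is one natural reconstruction idea under the assumption that the active subfamily spans $\mathbb{R}^n$; it does not reproduce the region-specific procedure mentioned above. The frame $\{\pm e_i\}$ with zero bias used here is a standard example of an injective ReLU-layer.

```python
import numpy as np

def relu_analysis(Phi, alpha, x):
    # x |-> (max(0, <x, phi_i> - alpha_i))_{i=1}^m
    return np.maximum(0.0, Phi @ x - alpha)

def reconstruct(Phi, alpha, y):
    # On active indices (y_i > 0) we know <x, phi_i> = y_i + alpha_i exactly,
    # so least squares over the active subframe recovers x whenever that
    # subfamily spans R^n (minimum-norm solution otherwise).
    active = y > 0
    x_hat, *_ = np.linalg.lstsq(Phi[active], y[active] + alpha[active], rcond=None)
    return x_hat

# The frame {+-e_i} in R^3 with zero bias: the active elements determine
# every coordinate of x, so the associated ReLU-layer is one-to-one.
Phi = np.vstack([np.eye(3), -np.eye(3)])
alpha = np.zeros(6)
x = np.array([0.6, -0.8, 0.0])  # a point on the unit sphere
y = relu_analysis(Phi, alpha, x)
print(np.allclose(reconstruct(Phi, alpha, y), x))  # True
```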