One fundamental problem in deep learning is understanding the excellent practical performance of deep Neural Networks (NNs). One explanation for their superiority is that they can realize a large family of complicated functions, i.e., they have powerful expressivity. The expressivity of a Neural Network with Piecewise Linear activations (PLNN) can be quantified by the maximal number of linear regions into which it can partition its input space. In this talk, we provide several mathematical results needed for studying the linear regions of Piecewise Linear Convolutional Neural Networks (PLCNNs), and use them to derive the maximal and average numbers of linear regions for one-layer PLCNNs. Furthermore, we obtain upper and lower bounds on the number of linear regions of multi-layer PLCNNs. The Rectified Linear Unit (ReLU) is a piecewise linear activation function that has been widely adopted in various architectures. Our results suggest that deeper ReLU CNNs have more powerful expressivity than their shallower counterparts, and that ReLU CNNs have greater expressivity per parameter than fully-connected ReLU NNs, in terms of the number of linear regions.
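
As a concrete illustration of what counting linear regions means, the sketch below (our own illustration, not taken from the talk) estimates the number of regions of a toy one-layer fully-connected ReLU network by sampling inputs and counting distinct ReLU activation patterns; the weights, bias, and sampling box are arbitrary choices made only for this example.

```python
import numpy as np
from math import comb

# Illustrative sketch: empirically lower-bound the number of linear regions
# of a one-layer ReLU network y = ReLU(W x + b) by sampling inputs and
# counting the distinct activation patterns that occur. Each activation
# pattern (the set of neurons that are "on") corresponds to one linear
# region of the input space. All sizes and values here are hypothetical.

rng = np.random.default_rng(0)

n_in, n_hidden = 2, 8          # small sizes so a dense grid sample is feasible
W = rng.standard_normal((n_hidden, n_in))
b = rng.standard_normal(n_hidden)

# Sample a dense grid over a bounded box in the input space.
grid = np.linspace(-3.0, 3.0, 500)
xs = np.stack(np.meshgrid(grid, grid), axis=-1).reshape(-1, n_in)

# Activation pattern: sign of the pre-activation at each hidden neuron.
patterns = (xs @ W.T + b > 0)

# Number of distinct patterns observed = empirical lower bound on the region count.
n_regions = len(np.unique(patterns, axis=0))
print(f"observed {n_regions} activation patterns (linear regions)")

# For comparison, the classical hyperplane-arrangement bound: a one-layer
# ReLU net with n_hidden neurons on n_in inputs has at most
# sum_{j=0}^{n_in} C(n_hidden, j) linear regions.
max_regions = sum(comb(n_hidden, j) for j in range(n_in + 1))
print(f"theoretical maximum: {max_regions}")
```

The empirical count approaches the theoretical maximum as the sampling grid is refined, provided the hyperplanes defined by the rows of W are in general position within the sampled box.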