This document discusses the Shake-Shake and Shake-Drop regularization techniques for deep learning models. Shake-Shake replaces the standard summation of parallel branches in a multi-branch residual network with a stochastic affine combination, drawing independent random mixing coefficients for the forward and backward passes. Shake-Drop extends this idea to single-branch networks by perturbing the output of the residual branch during training; to keep training stable, the perturbation is gated by a Bernoulli variable in the style of stochastic depth. Experiments show that Shake-Drop improves generalization without adding parameters or inference-time computation, providing an effective regularizer for both multi-branch and single-branch networks.
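As a concrete illustration, the following is a minimal PyTorch-style sketch of the Shake-Shake combination. The class name `ShakeShakeFunction` and the per-example coefficient shape are assumptions for this sketch; the technique also admits per-batch and per-channel variants.

```python
import torch

class ShakeShakeFunction(torch.autograd.Function):
    """Stochastic affine combination of two parallel branch outputs.

    Forward pass:  y = alpha * x1 + (1 - alpha) * x2, alpha ~ U(0, 1).
    Backward pass: gradients are mixed with an independent beta ~ U(0, 1),
    so the forward and backward passes see different random weights.
    """

    @staticmethod
    def forward(ctx, x1, x2):
        # One random coefficient per example, broadcast over channels/space.
        alpha = torch.rand(x1.size(0), 1, 1, 1, device=x1.device)
        return alpha * x1 + (1 - alpha) * x2

    @staticmethod
    def backward(ctx, grad_output):
        # Fresh random mixing weights for the backward pass.
        beta = torch.rand(grad_output.size(0), 1, 1, 1,
                          device=grad_output.device)
        return beta * grad_output, (1 - beta) * grad_output
```

A residual block would then compute `x + ShakeShakeFunction.apply(branch1(x), branch2(x))` during training; at test time the branches are averaged with the expected coefficient 0.5.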
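Shake-Drop applies a similar random perturbation to the single residual branch, but gates it with a Bernoulli variable so most examples still see an unperturbed branch. Below is a sketch under the same assumptions (the coefficient ranges alpha in [-1, 1] and beta in [0, 1] follow the ShakeDrop formulation; the class name and the `p_survive` parameter name are hypothetical):

```python
class ShakeDropFunction(torch.autograd.Function):
    """Gated perturbation of a single residual branch output.

    Forward:  y = (b + alpha - b * alpha) * x, with b ~ Bernoulli(p_survive)
              and alpha ~ U(-1, 1). When b = 1 the branch passes through
              unchanged; when b = 0 it is scaled by the random alpha.
    Backward: the gradient is scaled by (b + beta - b * beta) with an
              independent beta ~ U(0, 1).
    """

    @staticmethod
    def forward(ctx, x, p_survive):
        # Per-example Bernoulli gate: 1 keeps the branch, 0 shakes it.
        gate = torch.bernoulli(
            torch.full((x.size(0), 1, 1, 1), p_survive, device=x.device))
        alpha = torch.empty_like(gate).uniform_(-1.0, 1.0)
        ctx.save_for_backward(gate)
        return (gate + alpha - gate * alpha) * x

    @staticmethod
    def backward(ctx, grad_output):
        (gate,) = ctx.saved_tensors
        # Independent random scale for the backward pass, same gate.
        beta = torch.empty_like(gate).uniform_(0.0, 1.0)
        return (gate + beta - gate * beta) * grad_output, None
```

At inference time the branch output is simply scaled by the expected coefficient, which equals `p_survive` since alpha has mean zero, so no stochastic computation remains in the deployed model.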