This document discusses gaze-contingent ocular parallax rendering for virtual reality. Ocular parallax is a monocular depth cue: because the eye's center of projection and center of rotation do not coincide, the image seen by a single eye shifts in a small, depth-dependent way as gaze direction changes. The document describes how this effect can be simulated with a thick-lens eye model by re-rendering each eye's view from a center of projection that is updated according to the estimated focal point and tracked gaze position. It also notes several prior works that have incorporated ocular parallax as a depth cue for virtual and augmented reality applications.
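The core of the simulation described above is a per-frame camera update: offset each eye's render camera from the eye's center of rotation along the current gaze direction, so that eye rotations translate the center of projection and produce depth-dependent image shifts. The sketch below illustrates this idea under stated assumptions; the offset magnitude `D_CP` and the function name are illustrative, not values or APIs from the source.

```python
import numpy as np

# Illustrative assumption (not from the source): the eye's center of
# projection lies a few millimetres in front of its center of rotation
# along the gaze axis, approximated here as a fixed 5.7 mm offset.
D_CP = 0.0057  # metres

def ocular_parallax_camera(rotation_center, gaze_dir):
    """Return a render-camera position for one eye.

    rotation_center : 3-vector, the eye's center of rotation in world space
    gaze_dir        : 3-vector, gaze direction from the eye tracker
    """
    gaze = np.asarray(gaze_dir, dtype=float)
    gaze = gaze / np.linalg.norm(gaze)
    # Translate the center of projection along the current gaze direction.
    # As gaze changes, this small translation shifts near and far scene
    # points by different amounts on the image, i.e. ocular parallax.
    return np.asarray(rotation_center, dtype=float) + D_CP * gaze

# Looking straight ahead vs. rotated 20 degrees to the right moves the
# camera laterally by a fraction of a millimetre.
c_ahead = ocular_parallax_camera([0.0, 0.0, 0.0], [0.0, 0.0, -1.0])
theta = np.radians(20.0)
c_right = ocular_parallax_camera([0.0, 0.0, 0.0],
                                 [np.sin(theta), 0.0, -np.cos(theta)])
```

In a real renderer this position (together with the gaze direction) would feed the eye's view matrix every frame, so the parallax updates continuously with tracked gaze rather than being baked into a fixed interpupillary offset.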