As data scientists, we care about making valid statistical inferences from experiments, and we've adapted well-established, well-understood statistical methods to help us do so in our A/B tests. Our stakeholders, though, care about making good product decisions efficiently. I'll describe how the way we design A/B tests can put these goals in tension, and why that tension often causes a gap between how A/B tests are intended to be used and how they are actually used. I'll also talk about how I've used R to implement alternative experimental approaches that have helped close that gap between data scientists and stakeholders.