We know that Code Reviews are a Good Thing. We probably have our own personal lists of things we look for in the code we review, while also fearing what others might say about our code. How do we ensure that code reviews are actually benefiting the team, and the application? How do we decide who does the reviews? What does "done" look like?
In this talk, Trisha will identify some best practices to follow. She'll talk about what's really important in a code review, and set out some guidelines to follow in order to maximise the value of the code review and minimise the pain.
• Fit with the overall architecture
• SOLID principles, Domain Driven Design, Design Patterns or other paradigms of choice
• New code follows team’s current practices
• Code is in the right place
• Code reuse
• Over-engineering
• Readable code and tests
• Testing the right things
• Exception error messages
• Subtle bugs
• Security
• Regulatory requirements
• Performance
• Documentation and/or help files have been updated
• Spelling, punctuation & grammar on user messages
What to look for
https://blog.jetbrains.com/upsource/tag/what-to-look-for/
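To make a couple of these concrete: below is a small, purely hypothetical Java sketch (all names invented) of the kind of detail a reviewer might flag under "Exception error messages" and "Subtle bugs" - an error message with no context, and a failure that gets silently swallowed.

```java
import java.util.Map;

public class OrderLookup {

    private final Map<String, String> ordersById;

    public OrderLookup(Map<String, String> ordersById) {
        this.ordersById = ordersById;
    }

    public String findOrder(String orderId) {
        try {
            String order = ordersById.get(orderId);
            if (order == null) {
                // Reviewer: the message gives no context - which id, which store?
                // Clearer: "No order found for id '" + orderId + "'"
                throw new IllegalStateException("Error");
            }
            return order;
        } catch (IllegalStateException e) {
            // Reviewer: subtle bug - the failure is swallowed and the caller just
            // gets null, far away from the original cause.
            return null;
        }
    }
}
```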
And it’s tested.
And performance doesn’t really matter
I think we’re harder on code when we read it than when we write it
Reading code is hard
We could use some guidelines and ideas
This is what people automatically think of
And this is the first blog post I was asked to write for Upsource
In fact, it turned out to be such a big topic that I wrote a whole series of blog posts, which got turned into a book
The more I looked into it the more I realised that this was impossible. No-one could list everything that needs to be checked, and no code could pass every check
The real answer is, it depends
One size does not fit all
STORY: Code Review at UBS
- Didn’t know what I should be looking for
- Took it as an opportunity to push my own ideas (e.g. unit testing, clean code)
Can be solved with automation
Understand when the review happens
Inconsistent not just between reviewers, but even from the same reviewer
Happens when there’s no guidelines or goals
Because reviewing code is hard and boring
And because we generally don’t get rewarded for reviewing code
No understanding of what “Done” looks like
I could go on but that is not this talk
As an author you’re already in a place of vulnerability
And as a reviewer it just breaks your flow, and it’s hard to read code you didn’t write.
- Stops code going into production
- Code doesn't necessarily get better
- We don’t get better
Code reviews can be a massive waste of time
Which is why everyone focuses on telling you how important it is to do them, instead of how to do them well
Couldn’t this be automated?
What are the standards?
Much can be automated, but mission-critical systems obviously benefit from human reviewers as well
Look for gotchas: people experienced in the codebase can see if someone’s doing something that seems right but will have unintended consequences
If your code is clean and easy to reason about, if you have good automated tests, you should need to spend very little time on this
To increase the bus factor
Not necessarily the same
Here one or more people check the code to see if it could theoretically be understood by others
Can’t be automated
To evolve the code together
Could use code reviews to enforce cleaning up tech debt as you go, or migrating a legacy system. Rules like:
- Cleaning the method you’re in
- Addressing any warnings (or suppressed warnings) for this code (see the sketch below)
- May also need rules about what’s committed and how (e.g. formatting changes / warning fixes go in a separate commit)
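A minimal, hypothetical Java sketch (names invented) of the "address warnings as you go" rule: the raw-type version and its suppression get cleaned up while you’re already in the file, ideally as a separate, clearly labelled commit.

```java
import java.util.ArrayList;
import java.util.List;

public class CustomerNames {

    // Before: the warnings are suppressed rather than fixed
    @SuppressWarnings({"rawtypes", "unchecked"})
    public List namesBefore() {
        List names = new ArrayList();
        names.add("Alice");
        return names;
    }

    // After: the raw type is replaced with generics and the suppression goes away
    public List<String> namesAfter() {
        List<String> names = new ArrayList<>();
        names.add("Alice");
        return names;
    }
}
```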
You can’t automate any of this!
This depends upon the goals
When we’re so bored we give up?
Who reviews the code? Who’s responsible for signing off? Is a single “no” a veto, or is there another process?
Is it the same people every time? A set of experts?
Or a rotating pool?
Do people have areas of expertise?
Assignment can be automatic (see the sketch after this list) based on:
- who’s the author
- which code was touched
- which branch was touched
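On GitHub-style hosting, one common way to get this automatic assignment is a CODEOWNERS file; the paths and team names below are invented, purely as a sketch.

```
# Hypothetical mapping: reviewers are requested based on which code was touched
/order-service/**   @example-org/orders-team
/payments/**        @example-org/payments-team
*.sql               @example-org/dba-team
```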
Watchers vs reviewers
One person?
A committee? Is it all or nothing? Or a veto system?
How are deadlocks resolved?
Where the team is located and where the reviewers are located impacts your code review process
And where you do the code review
My favourite
If you’re located in the same office
Forces you to be nice
Forces input from many
Forces resolution of deadlock?
Via skype or slack or hangouts
Don’t have to be co-located, but have to have overlapping timezones
Can be fully async, works for remote and unfriendly timezones
Also includes GitHub pull requests
Also fully async
To look for in a code review
We can only identify this once we know the answers to the other questions
And it shouldn’t be a wishlist of everything
For example, if the code is related to Orders, is it in the Order Service?
If code is duplicated, should it be refactored into a more reusable pattern, or is that acceptable at this stage?
How does the team balance considerations of reusability with YAGNI?
Are confusing sections of code either documented, commented, or covered by understandable tests (according to team preference)?
Is the code going to accidentally point at the test database, or is there a hardcoded stub that should be swapped out for a real service?
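As a hypothetical illustration (the URL and credentials below are invented), this is the kind of environment mistake a reviewer can catch before it ships:

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.SQLException;

public class ReportDao {

    public Connection openConnection() throws SQLException {
        // Reviewer: this points at the test database and will go to production as-is;
        // the URL and credentials should come from environment-specific configuration
        return DriverManager.getConnection(
                "jdbc:postgresql://test-db.internal:5432/reports",
                "report_user", "changeme");
    }
}
```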
System constraints:
- Internal systems hosted on safe hardware don’t need the same level of security checks
- Web applications don’t need (the same) low latency performance
- If you do have regulatory requirements these should be worked into the code review (or automated) – e.g. audit
Checking design needs to be factored into the process some other way
Like up front design
Or a whiteboarding session
There shouldn’t be any surprises
Formatting checks, applying formatting
The build
Testing
- Unit, integration, end to end, performance, security
Deployment (a sketch of such a pipeline follows below)
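As a rough sketch of what that automation might look like (the job layout and Gradle task names here are assumptions, not a prescription), a CI pipeline can run the mechanical checks before a human ever opens the review:

```yaml
name: pre-review-checks
on: [pull_request]
jobs:
  checks:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - run: ./gradlew spotlessCheck     # formatting
      - run: ./gradlew build             # compile + unit tests
      - run: ./gradlew integrationTest   # slower tests, if the project has them
```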
Use due dates if possible
And priority
Aligned with code review goals and constraints
MoSCoW
- Must / Should / Could / Would
Why are we doing this code review
When is it being done
What are we looking for
Be nice. If you’re nice, the author is more likely to listen
Comments should relate to the code, not the author
Can also leave praise
What actions need to be taken
What priority are they (labels, colours)
Concerns listed and prioritised
Can’t reject without comments
MoSCoW
When you know Why, When, Who, Where, What it should be clear whether it’s accepted or not
And what the next steps are
Minimise cognitive load/context switching
Prioritised feedback should make this easier
And always respond to questions as fast as you can (even if it’s “I’ll get back to you”) – don’t be the ghost reviewer
And resolve answered discussions
Resolve as fast as possible.
The goal is to accept the code, not to make it wait in line
Who decides when the code is good to go? All reviewers or some?
Any power of veto
If next steps / actions were clear, if priorities were clear, it should be easy to understand when the review is good to go
Objectives = consistency
Objectives mean fewer surprises, greater consistency
Also can lead to code that meets those requirements before entering review
You can’t have sensible answers to any of these without knowing why first
Only when you’ve answered the first 4 can you sensibly know what to look for and create guidelines on how to do the code review.
If you have no control over the code review process and don’t know the answers to these questions, ask someone.
Does it do what it’s supposed to do?
Is there anything horribly wrong with it?
Does adding this feature/fixing this bug add more value than any debt introduced by maintaining this code?