I’ll start by introducing some flame bait:
PR REVIEWS ARE WORTHLESS!
How often are you waiting days or even weeks for a PR review? And when it’s finally about to be approved, you hit a merge conflict and have to start all over again? How often are PR reviews subjective rather than objective? How often are you subjected to anal-retentive suggestions that have nothing to do with the efficiency or effectiveness of your code and everything to do with somebody’s pet peeve?
Nobody ever reviews my tests
I’ve been paid to write code for 31 years now. It’s never good to deal in absolutes like always or never or everybody or nobody, but I can honestly say that in the last three decades my tests have NEVER EVER been reviewed by ANYBODY. The only time I ever get a comment on testing is when I submit a PR that doesn’t have ANY test code. This is the biggest reason I think PR reviews are completely worthless. Your test suite is the most important part of your project. Without a good test suite, that CI green checkmark you waited so long for means a lot less. Without a good test suite, developers become superstitious and fearful about changing any code. If PR reviewers aren’t reviewing the tests, what the fuck is the point of the review?
PR Reviews slow down development for little gain
PR reviews almost always slow down development and can even bring it to a standstill. I’ve experienced this at multiple companies, so it’s not just a symptom of open source development at Red Hat. At the last company I worked for, you’d dread any inter-team PR, because you could wait weeks, even months, for the review to land on somebody’s sprint board. On the Quarkus team, given the nature of open source development, the sheer volume of PRs is overwhelming for the few people who can actually review and approve them.
The results of the PR review backlog are merge conflicts, unintegrated code, and often paralyzed development. Every day you wait for an approval increases the probability of a merge conflict, creating additional work you didn’t plan for. As PRs pile up, you end up with a bunch of unintegrated code that has never been tested together, behind a CI green checkmark that could be days or weeks old. If you need to keep working on the same part of the code base, your PR backlog can block you from fixing bugs or making improvements while you wait for your code to become part of master.
As a result, you find yourself looking for rubber-stamp reviewers: “Hey buddy, can you just approve my PR? The sprint ends tomorrow.” We are all guilty of it at some point or another. Obviously, this defeats the purpose of the PR review.
PR reviewers look for low hanging fruit to deny your PR
The PR review process is a chore for developers. A good review takes time, time most developers don’t have. This is especially true for cross-team PRs, where the target team often doesn’t have your PR on their sprint board. Because of this time squeeze, reviewers look for low-hanging fruit to deny your PR so they don’t have to approve it. This is especially common for PRs from external non-team members, since trust hasn’t been established. Lacking the time for a comprehensive review, they reach for a simple subjective suggestion to bump you back to the CI/approval queue, e.g. “I don’t like the name of this method, class, or field.” Sometimes I think reviewers feel compelled to find something wrong, as if they aren’t doing their job if they don’t (especially younger devs).
PR reviews discourage iterative development
I’m old. I spent a significant number of years developing without a PR review process in place. What I’ve found personally is that I rarely do much iterative development anymore because the CI/PR review process takes so long. In the past, I’d be fixing a small bug, find a piece of code or API that I thought was poorly written, and just rewrite it. Refactorings often create bigger PRs, and developers hate reviewing large PRs because it takes a lot of time, so any large PR can spend far longer in the review queue. Because of this, I don’t do refactorings much anymore: a long review wait queue makes merge conflicts inevitable.
How to fix the problem?
How can we fix this PR review bottleneck?
Approve and open up a new issue instead
If the PR passes CI and is tested correctly, but you believe the code could be better, don’t demand a change: approve the PR, open a new ticket, and schedule the code improvement for later. Better yet, assign the new ticket to yourself.
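For GitHub projects this can be a two-command habit. Here’s a sketch with the gh CLI; the PR number, issue title, and wording are made up for illustration:

```sh
# Approve now, capture the (subjective) improvement as a follow-up issue,
# and assign it to yourself so the submitter isn't blocked.
gh pr review 1234 --approve --body "LGTM. Filing a follow-up for the cleanup I'd like to see."
gh issue create \
  --title "Follow-up to PR #1234: refactor suggested during review" \
  --body "Non-blocking improvement spotted while reviewing PR #1234." \
  --assignee @me
```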
Automatic approval after fixed time period
If a PR has the CI green light but sits un-reviewed, approve it blindly after a fixed time period.
(Stan Silvert suggested this in the comments.)
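Here’s one way this could look as a scheduled GitHub Actions job. Treat it as a minimal sketch, not a drop-in solution: the five-day cutoff is an arbitrary example, and your repository settings must allow Actions to approve pull requests.

```yaml
name: auto-approve-stale-green-prs
on:
  schedule:
    - cron: "0 8 * * *"   # run once a day

permissions:
  pull-requests: write

jobs:
  approve:
    runs-on: ubuntu-latest
    steps:
      - name: Approve un-reviewed PRs with green CI after 5 days
        env:
          GH_TOKEN: ${{ secrets.GITHUB_TOKEN }}
          GH_REPO: ${{ github.repository }}
        run: |
          cutoff=$(date --utc -d '5 days ago' +%Y-%m-%dT%H:%M:%SZ)
          gh pr list --state open \
            --json number,createdAt,reviewDecision \
            --jq ".[] | select(.createdAt < \"$cutoff\" and .reviewDecision != \"APPROVED\") | .number" |
          while read -r pr; do
            # gh pr checks exits non-zero if any check failed or is still pending
            if gh pr checks "$pr" > /dev/null 2>&1; then
              gh pr review "$pr" --approve \
                --body "Auto-approved: CI is green and this PR sat un-reviewed for 5 days."
            fi
          done
```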
Suggest, not demand subjective changes
Identify when you are asking for a subjective change. Be honest: aren’t the majority of PR review comments just subjective suggestions? Leave them as suggestions rather than blockers, or use GitHub’s suggested changes to propose the edit directly on the PR (the author can commit it with one click).
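GitHub’s suggestion blocks were made for exactly this: in a review comment, wrap your proposed replacement line in a suggestion fence and the author can apply it without another review round-trip. The method rename below is a hypothetical example:

````markdown
I'd prefer a more specific name here, but it's your call:
```suggestion
    public Customer findCustomerById(String id) {
```
````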
Introduce an “approve if CI passes” option
Give PRs an “approve if CI passes” option, and automate it where possible. I believe GitHub already supports something close to this.
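GitHub’s auto-merge feature is in this spirit: once enabled on a PR, GitHub merges it on its own the moment the required status checks (your CI) go green. A one-liner sketch with the gh CLI, using a made-up PR number:

```sh
# Opt the PR into auto-merge; it lands automatically once CI passes.
gh pr merge 1234 --auto --squash
```

Note that if branch protection also demands a human approval, auto-merge waits for that too, so pair this with one of the fixes above.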
Review tests
Read what my PR is trying to accomplish, then review my test code to see whether it is comprehensive enough. If you don’t have time for that, you are better off just approving the PR and removing yourself as a bottleneck.
Trust your test suite and CI
To speed up PR approval, you have to trust your test suite and CI builds. If your test suite sucks, you are going to be more superstitious and fearful of changes to your codebase, and you will look for any excuse to deny a PR. If you are a project lead, spend a sprint or two every once in a while focusing on improving your test suite and CI builds. Introduce backward compatibility tests. Add performance regression tests. Review code coverage reports. If your test suite is strong, it’s OK to just gloss over PRs, or even to blindly approve PRs from trusted individuals.
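In the Java/Maven world this post lives in, one concrete way to keep yourself honest is a coverage gate that fails the build when the test suite thins out. A sketch with the jacoco-maven-plugin; the 80% line-coverage floor is an arbitrary example, not a recommendation:

```xml
<!-- pom.xml: fail the build if overall line coverage drops below 80% -->
<plugin>
  <groupId>org.jacoco</groupId>
  <artifactId>jacoco-maven-plugin</artifactId>
  <version>0.8.11</version>
  <executions>
    <execution>
      <goals><goal>prepare-agent</goal></goals>
    </execution>
    <execution>
      <id>coverage-check</id>
      <goals><goal>check</goal></goals>
      <configuration>
        <rules>
          <rule>
            <element>BUNDLE</element>
            <limits>
              <limit>
                <counter>LINE</counter>
                <value>COVEREDRATIO</value>
                <minimum>0.80</minimum>
              </limit>
            </limits>
          </rule>
        </rules>
      </configuration>
    </execution>
  </executions>
</plugin>
```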
Encourage asynchronous, pre-PR feedback, but without Slack
As a PR submitter, you will ease a lot of pain if you talk about your design before you begin coding. Talk about API changes you want to make before getting to work. Get some input. You’ll save a lot of CI/PR review time if you do some of this upfront.
You’ll be even happier if you fish for asynchronous feedback. Instead of a Zoom call, try sending out an email describing your ideas. This lets people give you feedback on their own time.
Why not just drop the PR review?
I’ll conclude my blog with some controversy: why not just drop PR reviews altogether? The trouble they cause outweighs their value. PR submitters hunt for rubber stampers. Reviewers don’t review tests, the most important part of your PR. Let your test suite and CI become your automated, implicit PR reviewer.
If you’re too scared to make this leap, maybe develop a circle of trust: a subset of developers who can be trusted to self-approve and merge their own PRs. Before Git, PR reviews, and CI, back in the old JBoss days of the 2000s, our open source developers earned commit rights. If you earned the trust of the project leaders, you earned the right to commit and merge whatever you wanted. Granted, in those days we also didn’t have Continuous Development, but maybe there is some middle ground we can reach.
/votetokick pr-reviews
Finally, please note that this blog is tagged as “flame bait”. I’ve received some internal emails from people who are genuinely shocked that I could even question the value of the PR review. Maybe I’m just trying to show people that they need to improve and streamline their review process, or just get out of my way and trust me? 🙂
A postscript on mocks
When you DO use mocks, you are testing in an environment that is biased toward a passing test. An artificial environment will always make invalid assumptions about the behavior and stability of the real one.
What you end up with is a (hard-to-maintain) mock library that lets your tests pass and gives you a warm feeling that you’ve done a great job. But you are fooling yourself.
The ONLY reason to use mocks is when test execution in the real environment is unacceptably slow. That used to be the case quite often, but with frameworks like Arquillian and super-fast startup times for app servers, it is no longer a frequent concern.
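For the unfamiliar, this is roughly what an in-container Arquillian test looks like: the archive is deployed into a real container and the bean under test is injected, with no mocks anywhere. The Greeter bean is a hypothetical stand-in for your own code:

```java
import javax.inject.Inject;

import org.jboss.arquillian.container.test.api.Deployment;
import org.jboss.arquillian.junit.Arquillian;
import org.jboss.shrinkwrap.api.ShrinkWrap;
import org.jboss.shrinkwrap.api.asset.EmptyAsset;
import org.jboss.shrinkwrap.api.spec.JavaArchive;
import org.junit.Test;
import org.junit.runner.RunWith;

import static org.junit.Assert.assertEquals;

@RunWith(Arquillian.class)
public class GreeterTest {

    // The archive is deployed to a real container; CDI wiring is the real thing.
    @Deployment
    public static JavaArchive createDeployment() {
        return ShrinkWrap.create(JavaArchive.class)
                .addClass(Greeter.class)
                .addAsManifestResource(EmptyAsset.INSTANCE, "beans.xml");
    }

    @Inject
    Greeter greeter;  // injected by the container, not a mock

    @Test
    public void greetsLikeProduction() {
        assertEquals("Hello, Quarkus", greeter.greet("Quarkus"));
    }
}
```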
I’ll end with an even more radical assertion: the difference between integration testing and unit testing is purely semantic. Even the smallest, most focused unit test pulls in a lot of its environment (the JVM, the OS, etc.). In-container testing just pulls in more of the target environment and lets you find more errors sooner.
Test in the real environment and get results that are truly valid. Stop kidding yourself with mocks.