willio58 3 months ago

Agreed. I mainly manage and review code at this point in my career. I find many bugs, every once in a while finding something that would have caused an outage or notable problem for users.

What I find more though is code that isn't thought through. Tech debt and code smell are real, and they affect the performance of a team. Nipping that in the bud takes quality PR reviews and time to meet with submitters around issues you find.

Knock on wood, but at the company I work for now, my team and I have made quality PR reviews the norm... our codebase is now enjoyable and fun to work on. I highly recommend it!

One key aspect is being “kind, not nice”. Be helpful when leaving comments in PRs, but don’t be nice for the sake of avoiding conflict.

Also, if you find code reviews to be a waste of time, I can recommend one thing I do often - give warnings. I approve and give comments around things I’d like to see fixed in future, similar PRs. I don’t hold up the merge for little things, but at the same time I won’t let the little things slide forever.

  • mikepurvis 3 months ago

    "Tech debt and code smell are real"

    I think what I struggle most with is that oftentimes there's a valid business reason to "just ship it ASAP", but the missing piece is the accountability around the conditions attached to that. Like, okay, if we don't want to fix this now because it needs to be in the next release then we can merge it as-is, but you can't document this externally, it can't become part of an API, and there can't be any further development in this direction until X, Y, and Z have been rewritten to be in line with ABC.

    I find it profoundly hard to get buy-in for those types of discussions. Everyone is happy to smile and nod and the appropriate tickets are filed with deadlines attached, but then the next release rolls around and there's new business-imperative stuff that's the focus and the cleanup tickets are quietly moved to backlog with the deadlines removed.

    Seeing this repeated over a number of years has left me with kind of a cynicism about the process, where it feels like code review is at least partly an exercise in frustration; I don't have the backing required to insist on doing it right upfront, so instead I'm really just getting a preview of what is going to land in my lap a year or two from now.

    • hakunin 3 months ago

      Couple of points on this.

      1. A lot of problems arise from too few people working on too many things. If it's one or two devs and the backlog is growing, the problem is not that you have no time to fix things, but that you're understaffed. If you have enough people, then from the business perspective it shouldn't even be that noticeable that someone is refining previous work while someone else is building the next thing.

      2. If you're not understaffed, then the best time to clean up new code is during or immediately after writing it. A phrase I like to use is "while it's still fresh in memory". You're saving time and not adding new bugs, because you don't have to remember everything again and load all that context back into your head.

      • godelski 3 months ago

        > If you're not understaffed

        And it's worth noting that having some fat is good. I can get it when you're a startup and you're trying to pull yourself up by your bootstraps, but at some point you need some fat. Too much fat is bad, but no fat is also bad. Startups run lean because they have to, but when big businesses run too lean, it's called anorexia.

        • pjungwir 3 months ago

          Indeed, if you read The Goal or The Phoenix Project, they call this "slack". There is a whole theory about why slack matters.

        • BobaFloutist 3 months ago

          Another way of looking at it is that fat serves a purpose in the body. It's true that if you're optimizing for a very narrow outcome, very low (but not no) fat bodies can look ideal, but for one thing, that's actually a pretty unhealthy body, and for another, the best weight lifters in the world actually have pretty substantial fat reserves to support and sustain their muscles.

          Having no fat can help you go fast and be efficient, but it prevents you from going strong.

    • Viliam1234 3 months ago

      > Everyone is happy to smile and nod and the appropriate tickets are filed with deadlines attached

      Once I did a final review of the backlog of a project that got decommissioned after a decade of development and use. Most of the "fix tech debt" tickets were still there in the backlog. And how could it be otherwise, if the tech debt tickets always got assigned priority 2, everything the management wanted done got assigned priority 1, and there were always more priority 1 tasks than we could fit into a sprint?

      Next time, I will have no confusion about what "priority 2" actually means. It's in the backlog to make you feel happy, but you are not supposed to actually ever do it. (If by some miracle all current "priority 1" tickets get done at some moment, some of your team members will be assigned to a different project.)

    • klooney 3 months ago

      The previews are valuable though; they make you look like a wizard when you already know how something broke.

      • pavel_lishin 3 months ago

        But it makes you feel like Cassandra when you keep warning people about the same problems, and keep running into them months and years down the line.

        You can only post the Surprised Pikachu meme so many times before it loses its luster.

      • patrick451 3 months ago

        Or incompetent. If you knew this was going to break, why did you approve it? Your only defense is "there was a lot of pressure to get it into release xyz". There's not much sympathy for that defense. The animal spirits that thought the broken feature was the most important thing ever are long gone, and frustrations about the new outage caused by the previously most important feature ever have taken over.

        • klooney 3 months ago

          Don't approve it! That's actually a thing you can do.

      • mikepurvis 3 months ago

        Often it's more subtle than just "this is clearly going to take down production in X way at some point." The issue is more like feeling that the logic is too entangled and it's going to be hard to maintain later on, or that a library should or should not have been used, or something done with threads should have been async.

        So yeah, not as cut and dried as "I said it would happen and it did" but more like "I had a feeling this was going to turn out to be a pain and sure enough here I am reviewing code that represents that pain."

    • jiggawatts 3 months ago

      "There is no later." is my new mantra.

      • majikandy 3 months ago

        That reminds me of codebases littered with TODOs… where I like to Yoda it… do or do not, there is no todo.

        • mikepurvis 3 months ago

          A TODO is not inherently bad, but I think intent is important: how likely is it that someone will come back here purely with the intention of addressing that comment? If not likely, then the TODO will be taken up in the context of future refactoring, and in that case it's a gift to the person eventually contemplating that work, helping them understand something about the code or context that you realised too late in the project to be able to act on.

      • servbot 3 months ago

        I have two favorite mantras related to this:

        There is nothing more permanent than temporary.

        ... and ...

        We don't have time to do it right, but we do have time to do it twice.

  • jellyfishbeaver 3 months ago

    I am a new manager and I am struggling to get my team to understand the value of code reviews. I have been through so many rewrites and re-re-writes of spaghetti code that I am much more critical now when reviewing code, and I am trying to promote this culture on my team. Do you have any suggestions?

    - The same people leave detailed comments on others' merge requests, but get discouraged when nobody else puts in the same amount of effort for theirs.

    - People blindly accept suggestions with no resistance or discussion to get the review over with.

    - People send their MRs to side channels or other teams to get their changes merged in without resistance or back and forth. (I've had to revert several of these).

    • ecshafer 3 months ago

      Good PR culture is definitely something that has to be built from the ground up, and supported top down. At Shopify, which I think has a really good PR culture, we have a few things that I think help (beyond good CI/CD and static analysis tools):

      1. PRs are supposed to wait for 2 acceptances, can be shipped with 1, and can be emergency shipped with 0. So the barrier is low, but the culture supports more. We are expected to get 2 reviewers from our team to okay them.

      2. Depending on the code project, we have to fill out a template for the PR, what is in it, what it changes, what to look for when we test the code, etc.

      3. Some areas have code owners that might require an additional review from a specific team.

      4. We are expected to check out and test branches when we review them. So a quick read and LGTM is really discouraged outside of a few small cases.

      I have seen a lot of places that do the blind PR acceptance, and it's tough because without this really being enforced and encouraged, that culture is hard to change.

      • bobthepanda 3 months ago

        Also, there is something to be said for the fact that code reviews work well with code that is meant to be reviewed.

        The worst kind of peer review happens on PRs that are thousands of lines, because nobody wants to read all that and things will be missed. Where I have seen successful code review is where people break code into reviewable bits, and those individual reviews are so fast that they actually end up being completed faster than if it had been one giant PR.

        • anonymoushn 3 months ago

          How much additional time is needed, though, to break a self-contained change - one that's already the smallest it can reasonably be without breaking anything - into a bunch of smaller changes?

          • withinboredom 3 months ago

            Like 10-15 minutes ....

                git checkout master
                git checkout -b pr-1        # a working branch for the first slice
                git checkout my-branch -- . # copy the branch's changes in (this also stages them)
                git reset                   # unstage, so hunks can be picked interactively
                git add -p .                # select only the changes relevant to the first PR
                git commit
                git reset --hard            # drop the leftovers (they still live on my-branch)
                # and again for the next PR...
            • anonymoushn 3 months ago

              The question was specifically about scenarios in which this approach wouldn't work, for example because your team doesn't want to approve PRs containing only dead code, or because any subset of the change won't compile or won't preserve correct behavior without the other pieces.

              • dkdbejwi383 3 months ago

                It helps to have the right tooling in place to ship "incomplete" work, e.g. feature flags, so that you can ship a very light, not-ready-for-end-users version of some feature and continue to iterate on it in smaller PRs (a minimal sketch follows the list below).

                e.g. first pass adds a new screen and just dumps the output

                second pass adds input fields and controls

                next pass adds validation

                then add animations

                etc
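
                A minimal Python sketch of what that flag-gated rollout could look like (the flag name and render functions here are made up for illustration, not taken from any particular framework):

                    # Illustrative only: each follow-up PR fills in a piece of the
                    # new path while end users keep getting the old one.
                    FEATURE_FLAGS = {"new_report_screen": False}  # off in prod, on internally

                    def render_legacy_report(data):
                        return f"legacy report ({len(data)} rows)"

                    def render_new_report(data):
                        # PR 1 just dumps the output; later PRs add inputs,
                        # validation, animations, and so on.
                        return f"new report (work in progress): {data}"

                    def render_report(data):
                        if FEATURE_FLAGS["new_report_screen"]:
                            return render_new_report(data)
                        return render_legacy_report(data)

                    print(render_report({"q1": 100, "q2": 120}))  # users still get the legacy path

                Each follow-up PR then only touches render_new_report, and flipping (or removing) the flag is a one-line change.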

                • mewpmewp2 3 months ago

                  It sounds so good in theory, but in practice:

                  1. Frequently old code needs to be touched or refactored. A feature flag would not be enough.

                  2. Even feature flag itself can be a risky addition, and might affect existing customer usage.

                  Most of the time old code does need to be touched; there really aren't those perfect new isolated features, at least in my experience.

                  • bobthepanda 3 months ago

                    If anything, refactors should be behind feature flags *because* they are so disruptive.

                • jappgar 3 months ago

                  IMO this is a terrible approach, and it's why I hate the way feature flags are used nowadays.

                  For example, I'm not approving anything without input validation (frontend or backend). I have no idea if you're actually going to add validation later, before the feature flag is removed. "Trust me bro" doesn't work for me.

                  • bobthepanda 3 months ago

                    I mean you can have validation for the features you’ve written already behind the feature flag, while holding off on the stuff that doesn’t exist yet.

                    Feature flags don’t mean throwing the baby out with the bathwater.

        • ahepp 3 months ago

          do you think this can be done at the level of commits in a PR, or is there a big advantage to making it multiple PRs?

          • bobthepanda 3 months ago

            If each commit can be exposed to a separate review then that works.

            The main issue is humans:

            1. Do not want to read long things, slowing down approvals since no one wants to touch it

            2. Cannot effectively review long things and something is more likely to slip past.

      • mewpmewp2 3 months ago

        We have these things as well, but usually people treat them as bureaucratic obstacles and don't actually perform the steps. E.g. the template is ignored, and the reviewer doesn't check out the branch - it's just LGTM and good to go. Few people actually take a more serious look.

    • xmprt 3 months ago

      Culture for code reviews doesn't start out of thin air. Unless you have processes for CI/CD, testing, task estimation, retrospectives, incident postmortems, etc., there's never going to be a point where you will convince people that they're helpful. So start with those.

      There's always going to be pushback from adding more process, but if there's an understanding amongst the team that keeping things working is P0 then these processes will slowly/naturally come up as the team realizes that investing in them proactively will save them time down the road.

    • azov 3 months ago

      It takes time.

      For a team not used to code reviews, they might seem more trouble than they're worth at first. Most likely they will be more trouble than they're worth for the first few months. Keep doing them and eventually your smart developers will figure "if we have to do this anyway, we may as well find something useful to say" :)

      A few things you can do to make it smoother:

      - Manage expectations. Initially it may be as simple as "we just want to have a second pair of eyes on every change" or "be aware what other team members are up to" - i.e. communication first, improving code second.

      - Set up your tooling to make the process smooth. If somebody wants to just get it over with - it should be easier for them to use your official review process than to use some side channel. A vicious alternative is to make using side channels harder ;)

      - Leverage automation. Run tests, linters, static checkers, etc. on your PRs so that developers get something useful even if no human leaves interesting comments.

      - If some team members already have experience with code reviews - adjust their workload so that they can do more reviews. They are effectively training others by example.

      - Make sure that code changes under review are reasonably sized. Encourage submitting changes for review early and often. "Here is the feature I worked on for 3 months, and it's due on Friday, please review" won't make anybody happy.

      - Make it less intimidating. Code reviews are not just for finding bugs and flaws, encourage reviewers to say positive things as well.

      • majikandy 3 months ago

        This is quite good advice. I feel the “we have to do this anyway” line is more like… “so we might as well make it easy for ourselves”… e.g. write code that works and that you self-tested, through tests and manually if needed, so the reviewer doesn’t have to get bogged down in actually running it (start by adding screenshots for this, but graduate to not needing them). Keep PRs as small as possible, aka multiple PRs streaming after each other for a single feature card; get the PRs in as soon as they're valuable, and don't block for nitpicks - but with the shared expectation that you start to agree on things that are better, and they happen with the next changes.

        The general mantra being that “if it works then it shouldn’t be blocked”, and the developer can choose to improve the maintainability there and then, or delay it to the next or a later PR at their discretion. After all, you trust each other.

      • Izkata 3 months ago

        > Set up your tooling to make the process smooth.

        > Leverage automation. Run [..] linters, static checkers [..]

        These don't make the process smooth unless you set them up to simply give a warning rather than block the build/merge. And with that they'll likely get ignored anyway.

        I think linters/etc should be avoided until you already have buy-in from the team.

        • azov 3 months ago

          It depends. If your codebase is already free of lint warnings - adding a blocking check to prevent new ones is no big deal. But if your blocking check means that everyone has to drop everything and spend a week fixing code - of course this won't be smooth.

          PS. Also, it’s a good idea to have a manual override for whatever autoblocks you set up. Most linters already come with this feature.
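
          For example, the per-line escape hatches that flake8 and pylint ship with look like this (the specific suppressions here are just illustrative):

              from os.path import *  # noqa: F403  -- star import kept while old call sites migrate
              legacyConfigDict = {"retries": 3}  # pylint: disable=invalid-name  (old name kept for compatibility)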

          • ElevenLathe 3 months ago

            IME even better than linters is something like `go fmt` (I work mainly in Python and use https://github.com/psf/black). Rather than forcing you to manually jimmy the code to satisfy the robot, just let the robot make it look the way it wants.

      • seadan83 3 months ago

        > A vicious alternative is to make using side channels harder

        If people are looking to side channels, that is a signal. Ignoring that - ignoring that people are trying to find ways to do their jobs effectively - seems to me to be missing the point.

        > If some team members already have experience with code reviews - adjust their workload so that they can do more reviews. They are effectively training others by example.

        This can backfire. Suddenly it's just a few (potentially just one or two) team members doing all of the code reviews. They feel pressure not to be the thing blocking shipping, so their reviews get done hastily. They LGTM rubber-stamp stuff from the other seniors and probably punt on the remaining reviews until the next day. They struggle to get their own work done; 2 hours a day of code reviewing is a huge impact (if you're reviewing work from 2 other developers who have been jamming out code for 6, 8, 10 hours in a day, it will take 2 hours to review it the next day).

        > - Make sure that code changes under review are reasonably sized. Encourage submitting changes for review early and often. "Here is the feature I worked on for 3 months, and it's due on Friday, please review" won't make anybody happy.

        On the other side of the coin, "Hey, review this really quickly so I can give you a series of 4 more changes in the next two hours, before finally sending the update that does the thing."

    • spankalee 3 months ago

      Culture is made, it's not accidental.

      I would raise all these issues and more in group meetings. Try to get people to understand the many different benefits review brings to both the committer and reviewer - by having them state the benefits they want or could see getting. Talk about various kinds of comments (clear bugs, performance, style, robustness, factoring and organization, etc.), and the various priorities from no-action-required to nits to blockers. Talk about the priority of reviews themselves. Reducing the latency of reviews has a huge positive effect, ime.

      • seadan83 3 months ago

        People are smart - how about listening to their concerns? Do that before hitting them over the head with a list of 'best practices' and 'desired outcomes'.

        I'm starting to come to the opinion that nits are entirely and outright counterproductive in code reviews.

        Since I learned about "Ship/Show/Ask", it has drastically changed my view of code reviews: https://martinfowler.com/articles/ship-show-ask.html. I no longer believe reviewing every change is healthy; having seen the difference with other models, I can tell it's actively harmful now. Another good approach IMO is allowing for post-merge reviews. "Ship/show/ask" is better, but why is it written in stone that a code review MUST happen before merge?

    • 01HNNWZ0MV43FF 3 months ago

      What's the social atmosphere like?

      I ask because I had this one job, where the tech team was a few nerdy programmers in one office, before COVID, and a bunch of people in a friend group I wasn't part of, after COVID.

      By that I mean, before COVID it was common for the founder to take us out for lunch or tennis as like official team building time. I loved this because I'm a picky eater and it's hard for me to make friends, so if the company makes official initiatives, it's easier for me to fit in.

      After COVID, the official initiatives weakened. The team was too big to take everyone out, and I didn't join the friend groups who naturally found ways to socialize.

      In that new environment I no longer felt like an equal member of the team, I felt like an outsider who had authority on paper but didn't have any of the camaraderie needed to get things done and survive a work day.

      Even though everyone repeatedly said I was respected and valued as the most senior programmer, I found it impossible to be a good teammate in that new environment, I felt like I was just spending all day being mean and nobody got a chance to see me as human. That was part of why I quit.

      In that environment my code reviews sucked.

      Now I'm at a remote company where once again it feels like everyone is equally non-social, and I'm just gonna ride that as far as it goes. If they get an office I'll probably cash out and go on vacation for a year

      Edit: almost forgot, the other woman who was part of the original "nerdy programmer" team, ended up also burning out and quitting about the same time as I did. She also didn't really make friends in the new environment, and seems much happier pursuing her hobbies and taking it easy between jobs

      • shrimp_emoji 3 months ago

        Can juniors even be friends with seniors? I feel like it's a "professor-student"/"private-lieutenant" relationship.

        I spend all day being mean in code reviews too, and I'm a relative junior compared to most of my team! >:] They do not see me as human because I am not human. I do not have their human emotions and concerns. My only concern is code. They still like and respect me though, it feels like!

        • 01HNNWZ0MV43FF 3 months ago

          I guess I didn't like being the only lieutenant then lol

          It felt like I had authority on paper but not in effect. And nothing was organized and nobody wrote anything down...

    • godelski 3 months ago

      > The same people leave detailed comments on others' merge requests, but get discouraged when nobody else puts in the same amount of effort for theirs.

      This is always a precarious situation, because as soon as these people become jaded, your ability to build a good PR culture will also vanish. And they can become jaded for many reasons. If these people are not explicitly or implicitly valued, they will know. If people who are doing the incorrect things are getting promoted first (or even at the same rate!), getting the same raises/bonuses, and being treated equally on all accounts, the employee will almost always converge to "well, why am I putting in all this extra hard work if it's not benefiting me in any way?" And I don't think promises of early promotion or similar have a good effect, because there are many employees who've had those promises made to them and not followed through on[0]. So there needs to be some reward in the shorter term, even if it's incredibly minor.

      Also, do not underestimate the value of explicitly saying "good job." There's often a huge bias in communication where feedback is only given when something is wrong, and when good work is done it is left unsaid. You don't have to say it for everything, but I think you'll be surprised by how many people have never heard this from management.

      [0] I wanted to share a story of an instance I had with this. I was a green (mechanical) engineer working at a startup. I had a physics degree instead of an ME, but have always been hands on. But because of this I was paid less and not valued as much. I asked my manager what I would need to do to get promoted and to be on par with everyone else. I got it in writing so I could refer back to it. At my next performance review I was just talked down to, with complaints about how I didn't do this or that (sometimes things that were impossible, and sometimes things that were weird, like "your code may have been 20% faster but X couldn't understand it so we can't use it" -- X was a manager who had only been writing in C++ for < a year and I __heavily__ documented my use of functors). I asked about the things I did and the promises. They admitted I did all of them and even more. One of these was getting a contract (I think they put that there not expecting me to get it), and I was the only non-manager with one, bringing in 20% of company revenue while being the only person on that project. You can imagine I walked out of that meeting polishing up my resume, and I was strictly a 9-to-5er doing the bare minimum from that point on. But the next manager I had was liberal with compliments and would critique instead of complain. He understood that there were unknown unknowns and all that, and would actually tell me to go home when I was putting in overtime. I never worked harder in my life AND it was the happiest I had been. A manager can make or break an employee. And part of this is that there may be ways to get back those broken employees, but you might need to figure out why they became broken in the first place. And whether it is something you can fix or not. I believe environment has a big impact on employee attitudes and thus efficiency/productivity. If passion is worth 10 IQ points, then happiness is at least a big factor in making an employee productive. Everyone can win because it isn't a zero-sum game.

    • locuscoeruleus 3 months ago

      Talk to people and be curious why they don't value code reviews.

    • andirk 3 months ago

      Please continue to be diligent in your PRs! Garbage code belongs in the garbage.

      • seadan83 3 months ago

        Plenty of garbage code is making billions for all of the big tech companies that exist. It's mostly garbage code at Amazon, Facebook, etc. Let alone the smaller companies that were "bootstrapped", had founder-level code built, and then are in that constant "fixing and scaling" phase.

        More though, what kind of culture develops when someone works on something - and then it's called garbage? Or when someone looks at their rate of progress and makes a tactical decision that perhaps something is fine - not great, but it's f'in fine - and they are going to spend the time saved getting the project done; what happens then when the reviewer decides the PR is "garbage"?

        I've also come to learn that I need to be more intentional about letting various fires burn. Can't fix everything, and sometimes simple & low quality is best.

        • andirk 3 months ago

          I hear you, but I have had to waste countless hours and endure increased stress because someone pushed code that is way more LOC than needed, difficult to read, with no docs, against best practices, on and on. I have also dealt with chunks of garbage code that have worked for years with no need to modify them, and I'm fine with that. Getting an MVP out the door or a Hack Day project is one thing. Literally not being able to write good code when the dev has plenty of time to do so, because they lack the skillset and/or discipline, is not cool.

    • Viliam1234 3 months ago

      I don't know the culture of your company, so I am just guessing here. In my experience, these are the most frequent obstacles to code reviews:

      - The team is understaffed, there is not enough time to do things properly. I can write the code in a day, but it would take two days to make it really good (to do the code review carefully, to address the comments, to review the rewrites), and at the end I would get scolded because my measured productivity got too low. In other words, in theory the company wants code reviews, but in practice it rewards those who skip them, and punishes those who actually do them. Also, the more busy people are, the longer I probably need to wait until someone finally reviews my code.

      - Big ego. Some people take a suggestion to improve their code as a personal offense; as if the other person was telling them "I am a better programmer and indeed a better person than you". (Often the same people are quite happy to give a lot of comments to their colleagues, some of them genuine improvements, many of them merely "I would have done it differently" except they insist that their way is always better.) If you have such people on your team, you are going to have problems. In my experience, about 10-20% of men are like that. (No idea about women; I would expect the fraction to be smaller, but I didn't have enough female colleagues to verify this.) I prefer teams where no one is like that, because one person like that can ruin the entire team's dynamic.

      - Lack of experience. A junior developer reviewing another person's code sometimes simply doesn't know what is good and what is bad. "If the code compiles, it's probably good? Wow, there are even unit tests, and those pass too; what else is there to complain about?" But I think that even having the code reviewed by a junior is a good thing; at least this gives them an opportunity to ask why a particular approach was chosen.

      - If you only have one person on a project, and someone working on a different project is supposed to review their code, they probably feel like "I don't know much about this project, I don't know what their constraints are or what the rest of the project looks like... I guess, unless I see an obvious horrible error, I will just approve it".

      So basically it's either the ego or external pressure (or both). Depending on your position in the company, you may be unable to do anything about that.

      Some people have an ego that you can't fix. You can avoid hiring them, but if they already are on the team and they otherwise do their job well... you can just give more attention to this in the future. For example, if you have a group of people who cooperate well, do not just randomly redistribute them to other projects once their project is over; instead maybe try giving a new project to the same group. When interviewing new team members, always ask for feedback from the existing team members.

      Some companies create a toxic environment by comparing developers against each other, sometimes with the explicit goal of firing the lowest-performing 20% each year. In such an environment, code reviews obviously also become adversarial; people will try to avoid having their code reviewed, and some will use the code review as an opportunity to harm their competitors. In a cooperative environment, code reviews work much better; it's people working together to achieve a common goal. To create a cooperative environment, you need to treat the team as a whole. (For example, never measure individual velocity, because people will obviously soon notice that helping their team members may hurt their personal metric. You don't want to fire a person just because they spend too much time helping their colleagues with their tasks.)

      EDIT:

      If you use some automatic system to highlight problems with code, it will probably hurt the ego less than a similar comment made by another human. Just make sure to configure the tool properly, for example to write comments but never actually block the commit, otherwise developers will get really angry about the false positives.

    • zer00eyz 3 months ago

      Years ago I had a boss who, in a moment, threw a chair at me. (this is much less dramatic than it sounds).

      I would work for that man again in a heartbeat. Because for as much as he was apt to yell, or dress me down, he was also willing to give good advice, to elevate, to teach.

      The office is not a safe space. You seem to know what's wrong. I'm sure you have asked nicely. I am sure you offered the carrot, but does your team know you have a stick?

      > The same people leave detailed comments on others' merge requests

      Call these people out, in public, for doing good work. Tell everyone they are setting the bar and others are not living up to it.

      > People blindly accept suggestions

      Coaching, lots of one on one coaching about finding and having a voice. Lots of "team building" where you level out the playing field with the strong vs weak voices. Figure out what those quiet ones excel at and do a fun activity around that. Let them find legs...

      > People send their MRs to side channels or other teams

      Stick. Harshly worded emails. Dressing-downs in public. Telling your team in no uncertain terms that "this is unacceptable behavior".

      As for the chair thrower... He was always fair, he always put his team first, and I grew as a person, a manager, and an engineer working for him. It's not growing if it's happy-go-lucky good times while I collect a paycheck; it's growing pains, and spreading that (pain) around is part of your job.

      • lazyasciiart 3 months ago

        As far as dodging projectiles goes: yes the office is supposed to be a safe space, and if someone threw a chair at me, one of us would not work there the next day. (Add normal caveats for "maybe they threw the chair to save you from the ninja creeping up behind you".)

        • zer00eyz 3 months ago

          The "chair" in question was 3 coat hangers and 2 frisbees... I have no idea how it held a human up.

          The "throw" was more of a shove and the thing went flying in my general direction.

          The only thing that chair was going to hurt was my feelings.

          It was far less scary than the office where I sat on an ammo shipment.

          • lazyasciiart 3 months ago

            I'm shocked, shocked, that you actually were just lying about your experiences to make whatever point you wanted. The one time I worked with a guy who lied about throwing chairs at people it turned out he was a wanted serial killer, and the company almost went bankrupt. Of course none of that is true, but if it was, it would be a colorful way to back up my refusal to work with people like that!

          • 01HNNWZ0MV43FF 3 months ago

            How courageous of you to do a scary job, not to protect other people from having to live hard lives, but so you can be a prick to them on bulletin boards. Maybe I should have done that lol

      • pavel_lishin 3 months ago

        > The office is not a safe space.

        It well fucking should be a space where I'm safe from my boss throwing large projectiles at me.

      • seadan83 3 months ago

        Some work cultures are too conflict averse. I think it comes down to personalities of course, but I want to generalize it to West coast people being conflict averse. In these examples, it's all the ra-ra cheerleading you'd find on LinkedIn: everything is awesome and super fast shipping squirrel.

        In these situations I find conflict is buried rather than resolved in a calm & understanding way. Without the latter being the example, the incentives are to go along to get along - meanwhile there is no longer a mechanism to resolve a disagreement. Suddenly then, in review, you're "that guy" who is delaying things, and both the reviewer and reviewee don't have a healthy culture to talk through the issues.

        Sorry for the rant. I think the downvotes are a reaction to the shock that some teams can have healthy conflict and work through it - and it can look like yelling sometimes (often yelling is just that, yelling... but when team members are actual friends with one another, they will have their own unique ways to address conflict).

  • danielmarkbruce 3 months ago

    Never realized this was a debated topic. Are there smart people who believe in not having code reviews? What's the best argument against code reviews?

    • bakje 3 months ago

      I’ve spoken with a CTO who was against them because they add too much overhead, partly because they’re too late in the process.

      He encouraged his team to discuss an approach beforehand or to work on something together.

      Other than that, they had a lot of tests and a very structured codebase. I guess it worked for them.

    • dkdbejwi383 3 months ago

      I worked in a team that didn't do reviews because _everything_ including spikes, research, etc, was done by two engineers pairing. This was remote, cameras on, all day.

      I found it utterly exhausting. I was somehow working at 100% capacity but producing output at 50% because so much of my cognitive bandwidth was taken up with the pairing process.

      • JackMorgan 3 months ago

        I've been on full-time pairing teams and noticed 2-5x more throughput, because there are fewer meetings, less code review, and less surfing the web. It's exhausting because suddenly I'm actually writing code 7+ hours a day, and that's really hard.

        Every other team I've been on without pairing, writing code is at most 3 hours a day. The rest of the time just gets chewed up by other communication channels.

    • seadan83 3 months ago

      I'd guess I've done somewhere in the ballpark of 1000 to 2000 code reviews. After that time, my opinion of them has drastically changed. I can give something of a long laundry list (my apologies for it being kinda ranty...):

      CR slows things down & often unnecessarily. Overly time consuming

      Time lost with needless explanations.

      Somewhat impossible when the reviewer pool is one or two people. A bigger reviewer pool is probably not going to have the context.

      Creates a bias to not ship. "Waiting for CR" can be a big impediment to shipping. Perhaps tonight was the right time for you to get something into prod, not tomorrow - but you're waiting for a CR.

      It's an example of process over people (AKA: cargo-culting). The context of when/where/how/why CR is important and is situational. CR best practices are going to have different results in different situations. Often CR is just done because it is a best practice, blindly. It would be preferable to think deeply about what is important & why - rather than just a "oh yeah, here is another hoop we jump through because #AmazonDidIt"

      Stratifies the team. The senior/elite/favored people invariably get easier times during review.

      CR can get political. Scrum masters & managers scrambling to unblock a team member and get something reviewed. Which is great for that one time, but reveals an utterly broken process otherwise. When a CR is "escalated", what's the chance the reviewer will actually spend the time needed and the back-and-forths to get things "properly" reviewed?

      Conducive to nitting, which is useless and counter-productive. Time spent on nits is waste, and it's draining to keep coming back to something to tweak it - "is it good enough yet?"

      Drive by reviews without context

      Coming to agreement during CR is difficult. Not everyone is able to observe/experience and/or resolve conflict.

      CR is late in the code development process; it's one of the worst times to get feedback, after something has been done, polished, tested, made production ready - and suddenly then someone thinks it should be done differently (which is a lot easier to see when something is already done and working). It's akin to writing code 3 times: once to get it right, a second time to get it nice, and a third time for whatever the hell the reviewer wants.

      Shows lack of trust in team. It is gatekeeping.

      Does not scale well. I was once told by a reviewer that I write code faster than they could read it. (At the time I reviewed about 6 PRs every day, and sent out about 3. I was doing 80% of reviews, it was a shit-show; the reviews I received were slow and useless - the team members were focused on trying to deliver their over-stretched projects; I was too stretched to give proper feedback and not work 100 hours a week.)

      Better options exist, namely the "ship/show/ask" strategy: https://martinfowler.com/articles/ship-show-ask.html

      That latter branching strategy puts it up to the code author to decide: "does this comment typo fix need a review? Does this fix in code that I authored originally and know everything about really need a blocking review?" In that last case, if a person can just merge a bunch of precursor refactors right away, they can get that knocked out - the PR they send out for review is then clean & interesting, and there is no time lost managing branches or pestering someone.

      A second option to help make things better is to allow "after-merge" reviews too. A few teams I've been on, we did that enough where we learned what kinds of things are good to ship on their own. To another extent, there wasn't the bandwidth to review everything. It was not a bad thing.

      • nickm12 3 months ago

        A number of these critiques seem like problems with the work culture rather than code reviews. Yes, nitpicking, cargo culting, gatekeeping, and favoritism are problems, but are they problems with _code reviews_ specifically?

        Some of the others are actually desirable features of code reviews in my experience. Yes, we don't want needless explanations or the code reviews only going to a small number of people, but my experience is that code reviews are a good mechanism for encouraging authors to write self-explanatory code and for sharing expertise around the team.

        This list just doesn't jibe with my experiences.

      • danielmarkbruce 3 months ago

        This is a bunch of downsides, but I guess I'm asking, does anyone intelligent think that given the alternatives, on balance, it's better to not do them? You seem quite against them - what would you propose as a concrete alternative and what would the result likely be?

        • seadan83 3 months ago

          Just as there is no single strongest "for-argument", same with the "against-argument" -> context matters more than 'best practice'.

          FWIW, I noted:

          > Better options exist, namely the "ship/show/ask" strategy: https://martinfowler.com/articles/ship-show-ask.html

          At the end of the day, IMO it's like Agile & Scrum. You actually have to think hard about your context; you can't just go in and apply a boiler-plate, one-size-fits-all solution. (I've got a chip on my shoulder these days, as I feel too many in IT don't really want to think hard about their context. Too much of: "I want the magic formula for my team to 'go fast!'". The mantra: just do the stand-ups, do the retro, do the code review, do the planning poker, add checkstyle (a code syntax linter) - it'll all go great! I've now learned to ask - but did the project ship in time? Were any of those items actually useful in this context? If so, tell me exactly why and give details. I've also learned that if a team is far in the stone age, trying to adopt all of the best practices is likely to take longer than the remaining lifetime of that team and/or company. Which means they will do nothing further useful other than churn on process changes. Which brings me back to CRs: how many of them are actually useful? Some CRs are useful for sure. Though, how can a team spend more time doing useful CRs and less time doing non-useful CRs? What's more, every PR is a blocking activity; if we are blocking development for a usually not-useful activity -> that is a problem. The alternative does not have to be to throw out CR completely - I already mentioned two: "ship/show/ask", & post-merge review.)

  • giancarlostoro 3 months ago

    > “kind, not nice”

    Whenever I join a team, the first thing I tell devs is "I don't care how critical you are, just be honest". I think setting expectations early on is very critical. I think people not feeling attacked / too defensive of their code is a good step forward. People who vehemently defend their code are bad developers imho.

    • rty32 3 months ago

      Well, I have a colleague who spends more time defending his code by giving a list of wrong reasons rather than actually fixing it. This has happened more than once.

      I just move on. At some point he will realize how bad his code is.

    • 01HNNWZ0MV43FF 3 months ago

      It sounds like radical candor and I like it

  • saulpw 3 months ago

    How do you phrase these warnings? "Next time.."? I have a hard time being serious with my own warnings if it's fine enough for now.

    • elcomet 3 months ago

      We prefix our comments with "minor:", and all employees know that this means it's something that would be nice but not necessary to merge

    • seadan83 3 months ago

      Use the delete key. (My apologies for the snark, but I'm kinda serious, don't send those type of review comments at all).

      If it's fine for now, it's fine. If it's not fine - then speak up.

      If you do have such concerns, "fine for now" - that is something for an offline conversation - outside of the context of a CR. Which is also a good litmus test for how important a comment is. Would you schedule a 15 minute meeting to discuss the issue with your colleague? If not, then why is it okay to do that to someone during a CR?

    • audiodude 3 months ago

      "This is okay for now, but we should think about how we want to serialize these objects. Feel free to remove the N^2 algorithm in a follow up."

      • est31 3 months ago

        That works great in a setting where you are both employees of the same company and you respect each other, but it often doesn't work in the open source world; people just disappear and you never hear from them again. It is possible that they do file follow-ups, but in my experience it's rare.

        • sfink 3 months ago

          Yes, even within a company my threshold for accepting a change can vary pretty widely depending on my experience and relationship with the author. For an external contributor or someone I've never collaborated with (by reviewing code or having my code reviewed), I don't accept the code until almost everything is worked out to my satisfaction. With someone I work with regularly, it's not uncommon to accept a change with a comment like "this is all good, but you need to take X into account which will change almost everything in this patch" (I exaggerate, but only slightly). I know whether an update could be problematic and whether it is necessary to see it again. Sometimes there are a couple of obvious ways that something could be done; they picked one but weren't tied to it, and if I had a reason for picking a different one, I picked a different one for $REASON.

          Most are somewhere in between.

          Though in some ways it works the other way around. For an unfamiliar open source contributor, I need to be confident that the change is worthwhile. I will be lenient on stylistic things, and I'll just land their patch and then fix it up afterwards. For someone I've worked with a bunch (whether a familiar contributor or a coworker), I will trust their opinion on the underlying quality of a change, but be less tolerant of unnecessary stylistic differences since they should have already come into alignment on those and it's more likely to be an oversight if they missed something. (Plus, I don't want to be fixing up their changes after the fact, given that >90% of patches will come from regular contributors.)

        • seadan83 3 months ago

          To add to this, "no time like now" is real. Follow-ups rarely happen, tracking them is a pain. Making someone accountable for it is unfair, particularly if they disagree with the follow up (and presumably they do, otherwise they would have done the thing in the first place).

  • closeparen 3 months ago

    >time to meet with submitters around issues you find.

    What! I would be livid if someone scheduled a meeting with me about a PR. We have way too many meetings already, this is one of the only processes that is mercifully async.

    • fiddlerwoaroof 3 months ago

      Me too, I really dislike meetings that could have been handled by asking me three or four questions in slack or in the PR and then waiting fifteen minutes or so for me to answer.

    • hyperadvanced 3 months ago

      It doesn’t need to be formal or very long. I personally enjoy a PR meeting where we can poke at the code and understand it over someone dumping 2,000 lines of code in my lap at lunch time and hoping to get their spaghetti to prod by dinner

    • danielmarkbruce 3 months ago

      Livid? About a meeting to discuss work? Some comments in Slack etc. sound worse than intended, and people are aware of that and sometimes go out of their way to say it in a way so it's received as intended.

    • willio58 3 months ago

      To be clear I make time to meet if the submitter wants to talk through things. I don’t require meeting on every PR. I meet as needed on PRs, pretty infrequently as people get up to speed

    • seadan83 3 months ago

      "in-person" review can be the best I find. If a CR is going to have a dozen comments about everything, it's a disaster to do that async - the author is going to feel attacked. The dialog to resolve the conflict & disagreements is probably not going to play out well.

      Personally I like it when teams start with in-person CRs if they are not used to them. Only after healthy relationships and conflict resolution mechanisms established, do the PRs go async. I also like a rule that there should never be more than 2 round trips on communications in a CR, otherwise it should be done at the same time.

      Delaying small changes to allow for such round-trip times is not feasible in a lot of environments. There's too much to be done. Not worth it. When projects do fail, it's very rarely that some manager slaps a big "failed" label on it. I bring that up to say that projects fail all the damn time because the team is moving too slow - and individuals often don't appreciate when they actually did need to move faster (the feedback loop is often missing, of, "oh, we actually were too slow, we needed to ship faster than we actually did. The JIRA, planning, estimation, CR, best practices, time spent fixing linting - it all doesn't matter cause this project actually failed - it took too long.")

godelski 3 months ago

I think this is really important in that it is bigger than "code reviews." It does show how people greatly misunderstand statistics[0]. And what's even funny is at surface level the claim that code review "does nothing" __sounds__ ludicrous. But people "believe" because they are annoyed with code review, not because they "actually" believe the results.

But statistics are tricky. Take the example given in the article: "15% of smokers get lung cancer" compared to "80% of people with lung cancer smoke." These two are not in contradiction with one another; they are just different ways to view the same thing. In fact, this is often how people will mislead you (or how you may unintentionally mislead yourself!) with statistics.
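
A quick back-of-the-envelope check makes the point concrete. Only the 15% and 80% come from the article; the smoking prevalence and non-smoker risk below are made-up illustrative numbers:

    # Both statements can hold at once; they just condition on different things.
    p_smoker = 0.20                    # assumed share of smokers in the population
    p_cancer_given_smoker = 0.15       # "15% of smokers get lung cancer"
    p_cancer_given_nonsmoker = 0.0094  # assumed background rate for non-smokers

    p_cancer = (p_smoker * p_cancer_given_smoker
                + (1 - p_smoker) * p_cancer_given_nonsmoker)
    p_smoker_given_cancer = p_smoker * p_cancer_given_smoker / p_cancer

    print(f"P(smoker | lung cancer) = {p_smoker_given_cancer:.0%}")  # ~80%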

Another famous example is one that hits HN every once in a while: "Despite just 5.8% sales, over 38% of bug reports come from the Linux community"[1]. In short, this one is about how Linux users are just more trained to make bug reports and how most bugs are not system specific. So if you just classify bugs by the architecture of those submitting them, you'll actually miss out on a lot of valuable information. And because of how statistics work, even if only 50% of those reports applied beyond Linux (I'd be surprised if it were that low!), that's still a huge amount of useful bug reports. As a Linux user, I've seen these types of bugs, and they aren't uncommon. But I've frequently seen them dismissed because I report from a Linux system. Or worse, support sends you to their page that requests you to "upvote" a "feature" or bug issue. One you have to log in to. I can't take a company like that seriously, but hell, Spotify did that to me and I've sent them the line of code that was wrong. And Netflix did it to me saying "We don't block firefox", but switching user agents gave me access. Sometimes we've got to just think a bit more than surface level.
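
Rough arithmetic on the Linux numbers, using that hypothetical 50% platform-specific rate (an assumed worst case, not a measured figure):

    linux_share_of_sales = 0.058     # "just 5.8% sales"
    linux_share_of_reports = 0.38    # "over 38% of bug reports"
    platform_specific_rate = 0.50    # assumed worst case for Linux-only bugs

    # Reports filed per user, Linux vs. everyone else (relative rate).
    relative_rate = (linux_share_of_reports / linux_share_of_sales) / \
                    ((1 - linux_share_of_reports) / (1 - linux_share_of_sales))
    cross_platform_share = linux_share_of_reports * (1 - platform_specific_rate)

    print(f"a Linux user files roughly {relative_rate:.0f}x more reports")  # ~10x
    print(f"{cross_platform_share:.0%} of all reports would still be "
          f"cross-platform bugs found by Linux users")                      # 19%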

So I guess I wanted to say, there's a general lesson here that can be abstracted out.

[0] Everyone jokes that stats are made up, but this is equally bad.

[1] https://news.ycombinator.com/item?id=38392931

  • jiggawatts 3 months ago

    > support sends you to their page that requests you to "upvote" a "feature" or bug issue.

    Microsoft does this for enterprise products where customers might be paying $100K/mo or even millions.

    “We hear you, but your complaint is just not popular enough so go away.”

    “Sure it’s a catastrophic data loss bug that ate your finance transactions, but if other people can’t identify that their seemingly unrelated crash is the exact same issue then no fix for you.”

    “Now that you did get ten thousand votes on an issue titled ‘Consider doing your job’, we’ve decided to improve your experience by wiping out the bug forum and starting a new one from scratch that has fewer scathing comments from upset users.”

    • lhamil64 3 months ago

      My company/team has very different processes for bugs vs feature requests. If a customer opens a ticket and we determine it's a bug, we will generally fix it in the reported release and later (unless it's a security vulnerability or other major problem). But for feature requests we just tell them to submit it to a community and we evaluate it to see if it's valid and something we'd likely implement given the other work we have on our plate, but not necessarily do it any time soon.

      • godelski 3 months ago

        Sometimes feature requests are actually bugs and can be illustrative of one not properly understanding design.

        But I think it is important how user feature requests are interpreted. They have a frustration that you might not be aware of but they aren't aware of all the code and constraints. It can even be in design, which is still important. Very often there is a way to resolve a feature request that is not what the user explicitly asks for. But to do that you have to read between the lines, and carefully. Of course, some people go completely the wrong way with this and cough Apple cough decide that they know what is best for the user. It's totally a hard balance to strike, but I think it is very common for it to be framed much simpler.

        There's the joke that the user is dumb, and maybe they are, but that doesn't mean the issue they face is. It's not always dumb when a person pulls on a door that says push, because it may actually be that the sign and the design are saying different things[0]. And personally, I like it when users suggest methods of resolving the problem. I might throw that in the garbage, but it can often give me better context clues as to what they're trying to ask for, and it really does tell me whether they're thinking hard about the problem and whether they care about the product. They just don't have the same vantage point that I do, and that's okay.

        [0] https://www.youtube.com/watch?v=yY96hTb8WgI

        • jiggawatts 3 months ago

          > Sometimes feature requests are actually bugs

          You can have two missing features that add up to a bug in total. For example, I worked with two cloud products from the same vendor where a missing back-end HTTP feature of the CDN product interacted with a missing HTTP front-end feature of the PaaS service such that the two products that have a "natural fit" together couldn't actually be used in combination.

          This made many architectures that ought to have worked a no-go, forcing customers into contorted design patterns or third-party products.

          IMHO this is a bug ("Can't use your products"), but each team individually marked it as a missing feature and then they just ignored this for about three years.

          Also: not enough people voted the missing features up because not enough people were using the products... because they couldn't.

          I know this is a bit off-topic here, but it circles back to the "statistics is hard" intro in the original blog article. You can make catastrophic business mistakes relying on statistics you don't fully understand, such as this example of "you won't get many complaints for unusable products".

          You will get many complaints, however, for the usable products... they actually have users to do the complaining.

          https://en.wikipedia.org/wiki/Survivorship_bias

          • godelski 3 months ago

            > because not enough people were using the products... because they couldn't.

            I don't think this is off topic at all. I think it is explicitly on topic, at least to the underlying one. Not just that statistics are hard, but that it's hard to measure things and even harder to determine causality. Which is often the underlying goal of statistics and data science: to find out why things happen. Measurements are incredibly difficult and people often think they are simple. The problem is that whatever you're measuring is actually always a proxy and has uncertainty. Often uncertainty you won't know about if you don't have a good understanding of what the metric means. You'll always reap the rewards when putting in the hard work to do this, but unfortunately if you don't, it can take time before the seams start to crack. I think this asymmetry is often why people get sloppy.

            • jiggawatts 3 months ago

              The example I like to use is the confusion around COVID statistics, and how people mis-interpreted them.

              For example, the rate of infections (or deaths) per day that was reported regularly in the news is actually: true rate of infections * measurement accuracy * rate of measurement (a toy illustration with made-up numbers follows at the end of this comment).

              I.e.:

              If more people turn up to be tested, the "rate" would go up.

              If the PCR tests improved, the "rate" would go up.

              A similar thing applies with hospitalisations and deaths. It might go up because a strain is more lethal than another strain, or because more people are infected with the same strain, or because more deaths are attributed to COVID instead of something else.

              It doesn't help that different countries have different reporting standards, or that reporting standards changed over time due to the circumstances!

              Etc...

              It's complicated!
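
              A toy sketch of that confounding, with numbers made up purely for illustration (no real epidemiological data here):

                  # Reported cases confound several factors: true infections,
                  # how many people get tested, and how good the test is.
                  def reported_cases(true_infections, coverage, sensitivity):
                      return true_infections * coverage * sensitivity

                  baseline = reported_cases(10_000, coverage=0.30, sensitivity=0.85)      # 2550.0
                  more_testing = reported_cases(10_000, coverage=0.45, sensitivity=0.85)  # 3825.0
                  better_tests = reported_cases(10_000, coverage=0.30, sensitivity=0.95)  # 2850.0

                  # The headline "rate" jumps by 50% or ~12% even though the
                  # number of true infections never changed.
                  print(baseline, more_testing, better_tests)

              Same true infection count in all three scenarios; only the measurement changed.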

    • marcosdumay 3 months ago

      You mean they don't censor the bug reports and try to gaslight you into believing their software is flawless anymore?

      That's a tremendous improvement when compared to the time I interacted with them.

      • jiggawatts 3 months ago

        > don't censor the bug reports

        They do, but eventually even the polite but grumpy comments build up to the point that it looks bad. These comments are public -- that's the whole point -- so the only way to hide them is to delete them. Normally this upsets users even more, so the "trick" is to "improve" the service by dropping the entire forum on the floor and starting over with a new piece of software. Not because it's better in any way, but because it is an implicit DELETE FROM "BUGS".

        Microsoft is on their... what... third forum now? I lost count.

  • ehsankia 3 months ago

    Basically, code reviews also happen to find a lot of other non-bug stuff (probably nits and style issues).

    That's why looking at % is dangerous. You could be finding 5 bugs per code review, which is a lot, but if you also make 30 other non-bug comments, suddenly "only 15% of comments are bugs".

    • godelski 3 months ago

      Oh I completely agree. There are just a lot of things that can't so easily be measured and many things that can never be. But that doesn't mean they don't matter. Following the point you're making, enforcing good style can result in bugs not happening later on, or even save a lot of future time as your code doesn't slowly turn to spaghetti. And I think that's one thing people often miss: that spaghettification generally happens through a slower process, by dozens of commits, not by a handful.

poikroequ 3 months ago

The value of code reviews really depends on the code and the person working on the code. For a team who have spent years working on the same repo, code reviews may not hold much value. But if you have a new guy on the team, or a junior, you'll definitely want to review their code.

Code reviews can also do more than just find bugs. You can point out a better way of doing things. Maybe this SQL could be more efficient. Maybe you can refactor some bit of code to make it more robust. Maybe you should put a logging statement here. This method name is confusing, may I suggest renaming it to xyz?

  • dkdbejwi383 3 months ago

    > But if you have a new guy on the team, or a junior, you'll definitely want to review their code.

    Reviews _from_ juniors or new team members are also really valuable, as they don't have the history or tribal knowledge that others may have. They'll often spot things that have gone overlooked because "that's how it is".

  • gwd 3 months ago

    > For a team who have spent years working on the same repo, code reviews may not hold much value.

    I have definitely found bugs [ETA during code review] in code written by very senior developers in code they've been familiar with for over a decade.

  • kqr 3 months ago

    I got really curious and I'd like to ask you some follow-up questions on your experience in reviewing and receiving reviews. Do you mind shooting an email to hn@xkqr.org?

  • phito 3 months ago

    Code reviews also keep the team up to date with what is changing in the code

    • seadan83 3 months ago

      Commonly stated, except I think that statement is wrong. A PR is the thing that has the effect of sending emails to the team; it is the PR that leaves behind a webpage & summary of what has changed. It is the PR that is doing the work there, not the CR. Which implies you can open and then immediately merge PRs without CR to get that same benefit.

      CR does give others a chance to study code, and become familiar with it - but that is different from "keeping up to date."

jt2190 3 months ago

I’m not sure why the author ignores the “… that should block a submission” part.

The abstract of the paper:

> Because of its many uses and benefits, code reviews are a standard part of the modern software engineering workflow. Since they require involvement of people, code reviewing is often the longest part of the code integration activities. Using experience gained at Microsoft and with support of data, we posit (1) that code reviews often do not find functionality issues that should block a code submission; (2) that effective code reviews should be performed by people with specific set of skills; and (3) that the social aspect of code reviews cannot be ignored. We find that we need to be more sophisticated with our guidelines for the code review workflow. We show how our findings from code reviewing practice influence our code review tools at Microsoft. Finally, we assert that, due to its costs, code reviewing practice is a topic deserving to be better understood, systematized and applied to software engineering workflow with more precision than the best practice currently prescribes.

“Code Reviews Do Not Find Bugs: How the Current Code Review Best Practice Slows Us Down”

https://www.microsoft.com/en-us/research/wp-content/uploads/...

  • gwd 3 months ago

    The "that should block submission" is always one of the trickiest parts. There's a saying: "Everyone that drives slower than you is an idiot, and everyone that drives faster than you is a maniac." But it is true that going faster increases danger, and there is a speed that appropriately balances benefit against risk; but everyone perceives it differently.

    The same is true of "code smell" issues: Everyone who asks you to change things is a pedant who's slowing down the project for pointless aesthetics, and everyone who pushes back against changes you've requested is a cowboy who is going to make the code harder to maintain in the future.

    So in the paper, how did they decide whether a non-bug change "should block submission" or not?

    • jt2190 3 months ago

      If a comment points out a bug/defect [1], then it should block.

      If you think about it, as bugs/defects are removed, the code becomes more correct and thus more stable because it doesn’t need additional changes to remove bugs, so removing bugs reduces the need for future maintenance.

      If we block due to future maintenance concerns, what we’re really asserting is that the requirements are unstable, and that removing today’s bugs is less valuable overall because requirement changes will remove the line of code with the bug and replace it with a new line of code with a new bug.

      I suppose it depends on the code review process at a given organization whether that’s the appropriate point at which to block code for architecture/design issues. In my experience the code review step is much too far downstream in the development process and much too narrowly focused on a subset of code to be an effective place for design changes that have significant impact on maintenance.

      [1] The paper authors reviewed data in Microsoft’s internal code review tool, which is proprietary, so we can’t see what the specific bugs were.

topkai22 3 months ago

Code reviews don’t just find bugs, they prevent them from being introduced in the first place.

Developers are more careful about what they write and submit when they know they’ll have someone else looking at it.

We went through a couple iterations of our code review policy on a multi-year project a while back. We never really saw code reviews catch a substantial number of bugs over time, but whenever we pulled back on code reviews we definitely saw the production error rate go up.

epolanski 3 months ago

My beef with code reviews is that they often lead to tremendous amounts of wasted time; that's many thousands spent in a single week, sometimes for simple pull requests.

Having worked for 6 years, which is not much, and not in as many places as others, I have formed the opinion that code reviews are like tests: they should be used as a tool when they are necessary, and they shouldn't be the default for every change.

In the best-case scenario, it is the person creating the pull request who requests reviews or decides that the code in question needs tests.

My opinion obviously applies to product software; for libraries, especially publicly exposed ones, you want as much discussion and testing as possible.

  • closeparen 3 months ago

    The discipline of putting up small, coherent, explained, tested, and reviewable units of change, that you have looked over and feel comfortable showing off to other people as your work product, is 80% of the value for me. Whether anyone else actually thinks about it deeply or has something useful to say about it is secondary.

    • goosejuice 3 months ago

      Indeed. It's kind of like rubberducking.

  • simonw 3 months ago

    This is certainly true for blocking code reviews. I'm interested in exploring the alternative, which is review-after-commit. There's an article describing those here: https://copyconstruct.medium.com/post-commit-reviews-b4cc216...

    Code still gets reviewed, but you don't end up with PRs languishing for hours, days or even weeks waiting to get a review from someone.

    • interactivecode 3 months ago

      I review a lot of code with the mindset of yes and…

      Basically, when I start, the PR is approved in my head until I find something blocking, i.e. a major problem that causes data loss, a big performance issue, or something that breaks other code. Anything else is a minor comment at most. The PR is approved by default. This gives the dev responsibility and ownership, plus it increases release cadence.

      Doing a follow up PR to improve or fix something is just as fast as blocking the MR, but blocking the MR can be bad for morale.

      This strategy might work better in young startups where having the feature _exist_ is better than not shipping. In my experience this builds up responsibility and ownership and removes the whole policing-of-other-people's-work vibe around code review.

      Also, decisions and discussion around formatting, when to test, design, features, functionality, and architecture should not happen during code review; they should have happened way before coding, or collaboratively while working. Code review is the worst time for that stuff; imho it should be for sanity-checking the implementation.

    • YZF 3 months ago

      I used to do a lot of this in a small team where we didn't block on reviews (we did reviews but didn't block). I was a senior developer on the team and I'd take time to read through new sections of code that came in. That worked pretty well.

      Interestingly enough, this project, which didn't have super strict code review requirements but had a lot of tests, is the code I've worked on that I would consider the most robust/high quality. It was run by > 10 million users in a fairly important application. It wasn't huge and it had good tests (and was generally quite testable).

      That said, it's really hard to control review-after-commit. Maybe we need better tooling for that. In my case, for the areas of code I was involved in, it was small enough to track in my head.

    • pavel_lishin 3 months ago

      I'd love to explore that alternative, but I'm not sure how to actually make sure that any errors found/changes suggested after the commit is pushed and deployed actually get implemented.

    • kqr 3 months ago

      I really like this idea! It's not like I need to check how any individual developer approaches their work (although that could become a useful mentoring session in some cases) but what matters is what it looks like before going into production.

      The main difficulty I see with the described approach is that different changes will be interleaved in the trunk and it might be hard to extract just one of them to deploy. But that's what feature flags are for!

  • ycombinatornews 3 months ago

    There’s probably more to this story. It’s somewhat of a skill, and maybe an art, to create MRs so that they are easy to review and get approved.

    It is also true for the feedback on MRs. Providing compressed and succinct feedback makes it faster to address.

    Almost like “if the change is difficult, refactor and make the change easy”. There are many ways to do one thing, some are better, some are not.

    Companies and teams that have good review culture are successful in using the reviews as a tool.

  • kqr 3 months ago

    This is so far from my experience with code reviews that I'd like to ask some questions to follow up on your experience. Do you mind shooting an email to hn@xkqr.org?

swatcoder 3 months ago

As with most processes, the dilemma with code reviews is in figuring out how they impact your team and your organization.

In a huge org, with thousands of engineers that's already burdened by hours per day of interruptions and process overhead, and release runways that already involve six stamps of bureaucracy, mandatory code reviews have very little downside (it's in the noise) but highly variable return (many people are just droning under the weight of process). The org loses nothing much by mandating it, but only certain teams will see a lot of value from it.

On the other extreme, a startup with five engineers will get backlogged with reviews (which then get shortchanged) because everybody is under pressure to either stay in their high-productivity flow or put out some pressing fire. The reviews probably could catch issues and share critical knowledge very regularly, but the org pays a pronounced penalty for the overhead and interruptions.

People long for "one size fits all" rules, crafting essays and publishing research papers to justify them, but the reality of what's right is often far more idiosyncratic.

  • kqr 3 months ago

    I don't disagree with the idea that "it depends" but for me, code review has generally worked better with lower overhead in the "startup with five engineers" type organisation. Can I ask you some follow-up questions on your experience in reviewing and receiving reviews? If so, send me an email at hn@xkqr.org!

david2ndaccount 3 months ago

In my experience, code reviews catch a lot of bugs. However, if you find yourself catching the same kind of bugs over and over again in review you should be finding ways to catch them automatically without involving a reviewer (static analysis, tests, linters, etc.)

  • cjriley 3 months ago

    Completely agree on utilizing static analysis as much as possible. My first instinct when finding an issue in a code review is to think, "could we have caught this with a <lint rule> of some kind?"
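
    As a sketch of what such a check can look like (assuming a Python codebase; the specific rule here, flagging mutable default arguments, is just a stand-in for whatever keeps coming up in your reviews):

        # Tiny ast-based check: flag mutable default arguments, a classic
        # recurring review comment that is cheap to automate instead of
        # re-litigating in every PR.
        import ast
        import sys

        def find_mutable_defaults(source, filename):
            tree = ast.parse(source, filename=filename)
            for node in ast.walk(tree):
                if isinstance(node, (ast.FunctionDef, ast.AsyncFunctionDef)):
                    for default in node.args.defaults + node.args.kw_defaults:
                        if isinstance(default, (ast.List, ast.Dict, ast.Set)):
                            yield f"{filename}:{default.lineno}: mutable default in {node.name}()"

        if __name__ == "__main__":
            problems = []
            for path in sys.argv[1:]:
                with open(path, encoding="utf-8") as f:
                    problems.extend(find_mutable_defaults(f.read(), path))
            if problems:
                print("\n".join(problems))
                sys.exit(1)  # non-zero exit fails the build, so reviewers never see these

    In practice you'd reach for an existing linter plugin first; the point is that each recurring review comment is a candidate for a rule like this.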

rebeccaskinner 3 months ago

I think the article is taking the wrong view. The statistic cited by the article that 15% of comments were about a bug seems in line with expectations, and I think it would only really be worth discussing if the number were _much higher_ or _much lower_.

Instead, I think there are two far more interesting questions to ask:

1. Is the rate at which code review identifies defects sufficient to use code review as a detection mechanism for defects?

After nearly 20 years of writing software, I'm pretty convinced that the answer here is no. Some reviewers are better than others, and some circumstances are more favorable to finding defects than others, but we should generally try to build processes that don't assume defects will be caught at a substantial rate by code review. It's nice when it works, but it's not a reliable enough way to catch errors to be a load bearing part of the process.

2. Is mandatory review of all code justified?

This is the one I'm on the fence about. In an environment where code reviews are high priority, people are trained to review effectively, and there are minimal organizational politics at play, then I hypothesize that allowing PR authors to decide whether to get a review or not would generally improve quality and velocity because code would ship more quickly and code that would benefit from a review would still be reviewed. In that scenario, I think we'd see the benefits of getting things shipped more quickly when they don't require a review, and reviews would be higher quality because code being flagged for review would be a positive sign to pay more attention.

Unfortunately, I could be wrong, and it's not the sort of experiment anyone wants to risk their reputation pushing for, so I doubt we're likely to see an experiment at a large enough scale to know for sure. If we're going to fail one way or another, I'd prefer to fail by doing too much code review rather than not enough.

  • kqr 3 months ago

    [flagged]

bluGill 3 months ago

Bugs are 'easy' to fix, I don't worry about finding them. I worry about the interfaces as they quickly become a nightmare to change just because of all the users.

  • kqr 3 months ago

    I'd be interested to hear more about your experience with code reviews. Could you send an email to hn@xkqr.org so that I can ask some follow-up questions, please?

29athrowaway 3 months ago

If you have a spellchecker, a code formatter and a linter, code reviews improve significantly. It's much better than doing that work by hand, or reviewing it by hand, and it leaves code reviews for higher-level ideas.

  • zelos 3 months ago

    Exactly. Code reviews shouldn't be about code formatting or anything that can be automated away with linters, formatters, code coverage limits and static analysis. If the build is green for the PR, then all that is already acceptable.
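
    A minimal sketch of that kind of gate, assuming the team has standardized on black and ruff (substitute whatever formatter and linter you actually use):

        # Pre-review gate: if any mechanical check fails, the build goes red and
        # the human review never has to mention formatting or lint issues.
        import subprocess
        import sys

        CHECKS = [
            ["black", "--check", "."],  # formatting is verified, not discussed
            ["ruff", "check", "."],     # lint rules encode the recurring nitpicks
        ]

        def main():
            failed = False
            for cmd in CHECKS:
                if subprocess.run(cmd).returncode != 0:
                    failed = True
            return 1 if failed else 0

        if __name__ == "__main__":
            sys.exit(main())

    Wire the same script into CI and a local pre-commit hook so the PR arrives already clean.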

BlackFly 3 months ago

Bear in mind I am pro code review, but...

There is a trick in pharmaceutical research where you test a potential candidate drug against placebo to yield a bad study that seems to show benefit. The reason it is a trick is because in many cases the alternative isn't placebo, it is an existing treatment. Then doctors learn about a more "modern" treatment, favor it for being modern and the better treatment may not be prescribed.

The alternatives to code review aren't doing nothing. The article claims that code reviews find a defect per 10 minutes--but only during the first hour of reviewing each day. By this same argument (ignore qualifications, extrapolate the numeric result), fast automated tests can potentially find thousands of defects in a second--if they run that quickly and the defects were already tested for. Static analysers, pair programming, documentation: these are all alternatives, and there are many more.

If you're spending an hour a day reviewing code then you are spending 12.5% of an eight-hour day doing it. Using time that way comes with an opportunity cost; it may be better spent elsewhere depending on your particular organization and code base. Of course, analysing everything to death also has an opportunity cost, but not analysing it generally leads to moving goalposts where the supposed rationale for doing something keeps changing. Today its purpose is uncovering defects, tomorrow it is knowledge sharing, the day after it is security. It is all of those things, but other practices may achieve these goals with more effective use of time and people's patience.

So why am I pro code review? Because choosing to interact and work together as a team, to learn about and compromise with your colleagues, makes for good team building while serving other technical purposes. I do think that pair programming can achieve this to a greater degree while also being more taxing on individuals. This assumes you control and own the process, though; if it has just become a rote ceremony then my feeling is you probably aren't net benefitting from it: you are simply doing it because you have no choice, not because you believe it to be a valuable use of time. If you have experienced both, a culture where people choose and find value in code reviews and a culture where people are forced to do it unquestioningly, then you may have witnessed how a dictum can destroy the prosocial value of a practice.

nitwit005 3 months ago

> Developers spend six hours per week reviewing. This is a bit too much

It's extremely difficult to adjust the time spent on reviews. The options are unattractive. Do you start blindly accepting changes when you hit the limit, or just stop and not let people merge code?

  • dakiol 3 months ago

    Why should one block merging code? The idea of someone spending hours working on some code changes being blocked by another individual that doesn’t allocate time for reviewing is alien to me. We are all professionals, let people merge their changes and make sure you find time to review them. If you can’t review them and they still need your approval, then approve.

    Unless you don’t trust your colleagues. If that’s the case, then code review is doomed anyway

some_furry 3 months ago

> During the first 60 minutes of code review of the day, the reviewer finds roughly one defect per ten minutes of reviewing – as long as they review less than about 50 lines of code per ten minutes.

Oh.

It normally takes me a few seconds to find bugs in code.

I always felt this was average performance for assessing software. If the average time is ten minutes per defect, I need to recalibrate my expectations for myself.

  • jonobird1 3 months ago

    It really depends on the code. A CSS bug? Easy peasy. But finding a logic hole in a payment integration, something someone missed or should have implemented but didn't (e.g. webhooks), requires a lot more time: the reviewer basically has to sit down and work out exactly what should have been implemented and how they would have developed it, and then cross-check that against what has been done. Otherwise you won't easily find those logical holes, which are effectively bugs, just not simple code bugs like a missing semicolon.

    • some_furry 3 months ago

      My day job is auditing cryptography. I'd probably be slower to find the root cause of a CSS bug than most of the folks that read HN. :3

  • lazyasciiart 3 months ago

    Presumably you understand that how long it takes to find bugs in code depends on the code. If not, then I hope you've read the code for Linux and SSL, etc.

    • some_furry 3 months ago

      Yes, of course it depends a lot on context.

      I've never had an incentive to read the Linux kernel code. I routinely find and disclose cryptography library bugs, though mostly in hobby projects of the "I thought it would be cool if there was a PHP implementation of GHASH" sort rather than in something like OpenSSL.

grumple 3 months ago

I find things wrong with virtually every nontrivial pull request when I’m the reviewer. Sometimes these are minor issues, but I spot bugs and edge cases all the time.

I see some comments about time. How long does a code review take? I can review hundreds of lines of code in a few minutes. It is much easier to review code than to write code imo, especially as you gain experience. For bigger efforts, get eyes on it throughout the process.

I’ve met a lot of developers who assume their code will just work right after they write it. They don’t test it, via code or manual qa. Then they act surprised when the stakeholder tells them it doesn’t work. Do the job right the first time. Slow is smooth and smooth is fast.

  • farmeroy 3 months ago

    I'm always surprised how often I get a pull request which either doesn't build or has failing unit tests or both. These are pretty easy to address at least - but when I think that certain code might be difficult to maintain, be an anti-pattern, or possibly present bugs in non-obvious ways, I find it really hard to effectively address those issues and often end up doubting my own suggestions

  • aitchnyu 3 months ago

    When I was working with Django, those who added code ran it in their systems. With lambda, which we mostly deploy and test in the cloud, people tend to dump code and leave.

mgreene 3 months ago

The paper's title is a bit provocative but I think the findings are interesting. Mainly around long-held beliefs about what developers perceive as the value vs what is actually happening.

You do bring up a good point about using change defect rate, though. I wish the researchers had cited that as the preferred unit of measurement. I did some research on change defect rates in popular open source projects and it's all over the map, ranging from roughly 12% to 40% [1].

The future I'd like to see is as developers we use objective measures to justify time investment for review. This is going to be increasingly important as agents start banging out small bug-fix tickets.

[1] https://www.shepherdly.io/post/benchmarking-risk-quality-kpi...

sarchertech 3 months ago

I remember a time before you needed an approval to merge a PR (I also remember a time before PRs or any widespread version control system).

I can count on one hand the number of times someone has caught a bug in my code that should have stopped deployment. Not that I haven't deployed serious bugs to production, but they've almost never been caught by someone reviewing my code.

Occasionally someone suggests a better way to do something, or asks a question that ends up with me coming up with a better way of doing something. But those situations are also rare. And I can't think of many times at all when the impact was worth the time spent on the process.

Pair programming and collaboration can be hugely beneficial, but the minimal effort PR approval culture we’ve developed is a very poor substitute.

  • dgb23 3 months ago

    Both code reviews and pair programming can be very useful if they serve a specific purpose.

    Getting someone up to speed with unfamiliar code, disentangling hairy code so it becomes clearer, hunting down bugs, or finding unknown unknowns such as latent bugs or unnecessary complexity.

    However, in many cases not looking at the screen when doing these kinds of things is more helpful. It's often more beneficial to build a mental model in your head and then riff off each other: drawing things on a board, writing stuff down in a markdown file, and explaining things in simple terms, rather than actually coding or reading actual code.

    Not sure if that still counts as pair programming or code reviewing but this free form way of talking about code is very effective.

  • wrsh07 3 months ago

    I've caught bugs in reviews, but even better I've requested tests and those tests have caught bugs

    Even a low effort code review can identify missing unittests

  • kqr 3 months ago

    It certainly sounds like you write seriously high-quality code! And judging from your profile, I'd be inclined to think you know what you are talking about. I'd like to ask a little more around your experience here. Do you mind sending an email to hn@xkqr.org so that I can ask some follow-ups?

sys_64738 3 months ago

There are various levels to code reviews. Code review tools that are web based are pretty poor in my experience. Anything more than a few lines across multiple files needs a cscope type tool.

Also what type of review? Is this a prototype needing a high level design review so that the actual review doesn’t turn into a design review? How often does that occur?

Who are the reviewers and what’s the process? Key stakeholders have more influence and you need to consider the reviewer’s experience, knowledge and credibility.

Finally how important is the code? Is it kernel code, or high execution daemon code needing race condition and memory leak checking? Are you using static analysis for the code? Does the code even compile and do what it is designed to do? Where are the unit test logs?

Lots to consider.

andirk 3 months ago

Technical debt. Keep it minimal, and when needed, write a task for it to be looked into later.

Coding standards. Don't submit code that has rando extra lines and things that will slow down the next dev when they look into the past to learn what does what.

And most of all, make sure edge cases are covered, often via a truth table of all possible outcomes (a sketch of this follows after this comment).

I often comment on a PR saying "blah blah, but not blocking" so I'll allow it but at least my entitled opinion was known in case that code issue comes up later.

My PRs take a long time, because I dig the F in.
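
A sketch of the truth-table idea above, pytest-style. The function and its rules are hypothetical, purely to show the shape of enumerating every combination:

    # Hypothetical example: spell out every combination of the boolean inputs,
    # so a reviewer can see at a glance that no edge case was skipped.
    import pytest

    def can_checkout(logged_in, cart_empty, payment_on_file):
        # Toy rule: checkout allowed only when logged in, cart non-empty, payment set.
        return logged_in and not cart_empty and payment_on_file

    @pytest.mark.parametrize(
        "logged_in, cart_empty, payment_on_file, expected",
        [
            (True,  False, True,  True),   # happy path
            (True,  False, False, False),  # no payment method
            (True,  True,  True,  False),  # empty cart
            (True,  True,  False, False),
            (False, False, True,  False),  # not logged in
            (False, False, False, False),
            (False, True,  True,  False),
            (False, True,  False, False),
        ],
    )
    def test_can_checkout_truth_table(logged_in, cart_empty, payment_on_file, expected):
        assert can_checkout(logged_in, cart_empty, payment_on_file) is expected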

robertclaus 3 months ago

I find many standard processes can be described similarly - if you're mindful of what problem they solve, they should be incredibly useful. The tricks are the important but subtle details like not spending 2 hours straight reviewing an excessively long PR. Those are easy to forget once it's just part of the process.

OpenDrapery 3 months ago

I like to see working software alongside a PR. GitLab has the concept of a Review Environment, and it spoiled me. Ephemeral, dynamically provisioned environments that are deployed from a feature branch, are absolutely amazing productivity boosts.

It gives so much more context to a PR.

jonobird1 3 months ago

I'm not sure code reviews hold much merit. I've been a web developer for around 12 years and I've worked in companies big and small.

I think there should be a manual QA process to test the functionality of what the developer is pushing out.

The issue with code reviews is always that they take so much time for another developer. Many devs are super busy, so they just have a quick look at the PR and approve, or feel they have to add some comments. Reviewing properly means context switching away from whatever the dev is already doing: switching to that Git branch, pulling down the code, testing it all, and checking for logical bugs that a static read of the code won't pick up.

For juniors, code reviews are still useful, as you will be able to spot poor-quality code. For seniors, not as much, for the reasons above; you're better off having a QA process to find any logic holes rather than expecting devs to invest so much time in context switching.

  • hakunin 3 months ago

    The problem here is not that developers are too busy, but that code reviews are treated as second-class citizens next to churning out new code. It's like saying "many devs are super busy working on feature A so they just write quick and dirty code for feature B". If reviews are an integral part of the feature production pipeline, there should be no issue with sitting down and spending a day reviewing code. For bigger, more complex things it could be a few rounds of reviews.

    There is an approximate, non-linear relationship between the time it takes to produce the first PR and the time it takes to go through all rounds of review. This time can be pretty reliably estimated and taken into account.

  • skywhopper 3 months ago

    Nah, automated testing covers basic functionality. For most PRs, a senior familiar with the code wouldn’t need to check it out and manually test anything; that’s not what “code review” is most of the time. If you need them to look at the code in a running state, that should be part of the CI process, not a manual task for the developer.

    A good reviewer can call out bad strategic coding decisions or misinterpretations of the requirements. QA is another layer of review entirely.

  • kqr 3 months ago

    I'm a little surprised to hear this. Would you mind sending an email to hn@xkqr.org so that I can ask some follow-up questions, please?

dakiol 3 months ago

The only thing I don’t like about code reviews are nitpick comments. Everyone has their own subjective way of writing code, if my code works and looks good enough, let it be.

alex_lav 3 months ago

Code reviews can find bugs.

More often, code reviews become opportunities for team members to bikeshed. Worse, an opportunity for a non-team member to exert power over a project.

  • spankalee 3 months ago

    Bikeshedding in a team can be good. If you're all painting the shed, it helps to agree on the color.

    More generally, code review is a great opportunity for incrementally gaining or encouraging alignment across the team.

    What the team chooses to align on, and how strongly, is left up to it, so hopefully they choose not to get bogged down in inconsequential details, but completely skipping a pretty cheap chance for reinforcing all kinds of cohesion would be a big mistake in my opinion.

    • alex_lav 3 months ago

      You're making a lot of assertions about the positive upside of code review. My point is there is too much opportunity for negative behavior. It's the same as everything in tech, and in life: "It can be good if everyone does their part to keep it good". And yet, most don't.

      • spankalee 3 months ago

        "most don't" is a strong claim. In my experience, core review has been undoubtedly good. I would never run or join a company without it.

        I'm writing code solo for the moment, and code review is maybe the thing I miss the most.

        • alex_lav 3 months ago

          > "most don't" is a strong claim.

          I'm happy to be reasonable. I guess my greater feeling is that most devs aren't great at identifying when they should exercise restraint. For the same reason that most devs are abysmal interviewers, I think devs forget that code review is ultimately a human endeavor. Give your average dev the smallest amount of power and not enough guardrails, and legitimate silliness ensues.

          > I'm writing code solo for the moment, and code review is maybe the thing I miss the most.

          I feel as though "code review" is taking on too many meanings in this conversation. Code review in the form of a second (or more) qualified dev reading and commenting on code for the greater good? Obvious good. Code review in the form of github PRs at a non-FAANG company? Skip it. Kangaroo court.

      • sadops 3 months ago

        [flagged]

        • alex_lav 3 months ago

          Senior team members are usually the worst offenders.

  • 29athrowaway 3 months ago

    Code reviews can be an opportunity for sabotaging performance or for dominance. I have seen it numerous times.

    Conspiracies to delay code reviews for high performers in stack-ranking organizations are common.

    They can also be ruined by not rotating the reviewer role among the eligible reviewers on the team; in that case, everything just represents the opinion of a specific group.

  • sadops 3 months ago

    From where I sit, it's usually the people writing the bugs who are so averse to code reviews.

    • alex_lav 3 months ago

      Everyone that writes software writes bugs.

sriharshamur 3 months ago

What are some amazing blogs/resources to read to learn how to review PRs?

bigcat12345678 3 months ago

Hah? Code review of course finds bugs... It's like people do see...

banish-m4 3 months ago

Eliminating bugs requires sustained, vigilant, holistic, overlapping approaches:

- Code reviews prior to acceptance of commits (Facebook does this)

- Refactoring crap that manages to get through

- Removing features

- Higher-order languages with less code

- Removal of tech debt

- More eyeballs

- Wiser engineers

- Dedicating more time to better engineering

- Coding guidelines that optimize for straightforward code while not being so strict as to hinder strategic exceptions

- Negative LoC as a KPI