refibrillator 8 days ago

> project management risks: “Lead developer hit by bus”

> software engineering risks: “The server may not scale to 1000 users”

> You should distinguish them because engineering techniques rarely solve management risks, and vice versa.

It's not so rare in my experience. Code quality and organization, tests and documentation, using standard and well known tools - all of those would help both sides here.

That's why I've had to invoke the "hit by bus" hypothetical so many times in my career with colleagues and bosses, because it's a forcing function for reproducible and understandable software.

Pro tip: use "win the lottery" instead to avoid the negative connotation of injury or death.

  • ajanuary 8 days ago

    > Pro tip: use "win the lottery" instead to avoid the negative connotation of injury or death.

    I very much appreciate the attempt to reframe it positively. But personally, if I win the lottery, I’m still doing a handover. The key thing with “hit by a bus” is that, no matter what your personality, you don’t have any time to prepare. That’s why you’ve got to get the information out there today. Unfortunately I’ve yet to find a positive spin that has those same connotations.

    • DanHulton 7 days ago

      "Doing a handover" won't save a lot of companies. There's no level of "handover" that will save some of the shops I've been in that just 100% rely on one team member to handle _everything._ Even if they took a _month_ to try to pass along their accumulated knowledge, there are ingrained engineering practices and processes that all boil down to "Go ask Chris."

      Honestly, I'd say that if you're able to do a handover in two weeks, and that covers everything it needs to cover? Well, then probably the handover isn't _actually_ necessary, and anything you cover in those two weeks could probably be figured out by a reasonably competent teammate. Knowledge is rarely the issue, being a person-shaped load-bearing component for your team _is._

      (But also, good on you for taking a bit of time to make sure your old co-workers aren't left in the lurch. It probably also gives a bit of time to consider what you _really_ want to do with the winnings, so as not to blow it all in the first year like a lot of lottery winners do.)

      • chipdart 7 days ago

        > "Doing a handover" won't save a lot of companies. There's no level of "handover" that will save some of the shops I've been in that just 100% rely on one team member to handle _everything._ Even if they took a _month_ to try to pass along their accumulated knowledge, there are ingrained engineering practices and processes that all boil down to "Go ask Chris."

        That might very well be true, but to a manager, having a handover means the problem of losing institutional knowledge is solved; any subsequent problem due to missing context can then be written off as the result of a poor handover.

    • devsda 8 days ago

      Depends on the other person, but "gets hit by a bus or abducted by aliens" or some other (movie) trope doesn't sound too bad if it's an informal discussion.

      • serial_dev 8 days ago

        I dislike the "hit by the bus" phrase, not because it hypothetically kills a teammate, but because it implies that if one of us dies, the key worry of management is who's left who knows how to deploy the foo-banana service. This phrase is a reminder that nobody would care if you died apart from the fact that only you know how to do x and y.

        "Abducted by an alien" is so absurd, it takes away all that and replaces it with something potentially fun and an experience of a lifetime.

        I also very much prefer the lottery example, and the fact that someone on HN would still do a handover doesn't change that.

        • JaumeGreen 7 days ago

          People would care at a personal level, but the show must go on.

          I had to take a sudden leave the week before, and my coworkers and boss cared at a personal level. And while most of my tasks were taken care of by them, some weren't, and I had to rush on Monday morning when I got back.

          This talk is not about personal feelings (even though it could include "hold a memorial for the lost person"); it's about all the needed work that must go on in any scenario.

          You should understand that people will keep going on with their lives once you are not there, and that is good. Whether they think kindly of you after your passing depends on your relationship during life, not on any company preparedness scenario.

        • wegfawefgawefg 8 days ago

          Not that I am trying to make you upset, but I personally find this level of sensitivity in people annoying.

          Life and death metaphors are my right to enjoy and share irreverently as the mortal being I am.

          • serial_dev 7 days ago

            You can do whatever you want; I'm just sharing that if my manager says "we won't know how to fix a bug in the xyz service if you were hit and killed by a bus", my response will be "I don't really give a f what happens at this stupid company after I die".

            Take it like this to understand that mine was aimed to be constructive criticism:

            Stop reminding the people who work at your stupid company, doing all your stupid scrum bs ceremonies, that if they died and left their wife and children behind, you would be really worried about Tom needing two days to fix a bug in the iOS watch application, whereas it would have taken them only two hours. Again, you are free to do whatever you want, but if you keep reminding me that none of the bs I'm doing here really matters, don't be surprised when I quit as soon as possible.

            • skeeter2020 7 days ago

              but the manager is obviously (unless they are very, very bad) using a metaphor that you don't like. By responding literally, what you're really saying is "I don't really give a f what happens at this stupid company after I LEAVE MY CURRENT RESPONSIBILITIES", regardless of why. It sounds like you have extreme trust issues with your manager if they can't make a (pretty benign) verbal misstep without this sort of response, and your follow-up suggests you're in a really bad space. I can't believe it's not visible and leaking into other aspects of your work and interactions.

              • serial_dev 7 days ago

                I'm okay now, thank you.

                I did have a previous workplace, though, where they couldn't stop yapping about the bus factor, and I disliked that phrase because it kept reminding me that one day I'll die, and that I was wasting one more hour of my living days in a pointless retrospective that would have no positive effect on anything.

            • lelanthran 7 days ago

              > You can do whatever you want; I'm just sharing that if my manager says "we won't know how to fix a bug in the xyz service if you were hit and killed by a bus", my response will be "I don't really give a f what happens at this stupid company after I die".

              So? The company isn't going to let all their employees starve to death out of compassion for the one that did die.

              They know you don't give a fuck about what happens after you're dead, but they're still alive, and they have to keep things running so that they can continue eating.

              Telling people that you don't care what effect your death has on them is a pretty good way to indicate how selfish you are.

              • wegfawefgawefg 7 days ago

                I agree with you. But what if you hate the product, you hate scrumlords, you hate your job, and you think the world would be better off if everyone at your company went on to spend their lives working on something else?

                It may not be selfish to feel an emotion giving you a background hint about reality. There are hungry children in the world; I'm not selfish just because I don't want to eat fermented soybeans.

          • arcanemachiner 7 days ago

            I died reading this comment.

            • bombela 7 days ago

              But did you hand over your knowledge first?

          • mnsc 7 days ago

            So you would be OK if I used "abducted and gang raped for so long that you come out seemingly alive and physically intact, but your mind has broken and you enter a catatonic state and can't do a handover" as an example? Because if you are not one of those "sensitive people", you should be able to focus on the "can't do a handover" part of the scenario and not get caught up in the gruesome part?

            And before you ask: yes, I enjoy being overly graphic like this on the pseudonymous internet to exaggerate my points, but I wouldn't do this irl. That is hypocritical of me.

            • mirekrusin 7 days ago

              Depends: were the aliens attractive?

              • mnsc 7 days ago

                Yeah, I realize that "abducted" could suggest aliens. I was thinking more along the lines of a Mexican-drug-cartel-style abduction, where your being raped and left alive is a wake-up call to your partner on the police force who has turned the wrong stones. And the dudes doing the raping are not pretty. But the point is you are not able to do a proper handover, remember?

                • mirekrusin 7 days ago

                  Why do you feel the need to go into details? "Bus factor" is a well-known concept; it doesn't focus on details like how the head disintegrates, etc.

                  • mnsc 7 days ago

                    See my other reply...

                • skeeter2020 7 days ago

                  The fact that you only respond like this anonymously should answer your original question/not-a-question.

                  From a more practical perspective: no, because you're replacing a well-understood metaphor with something that is unknown, juvenile and stupid, removing the value we get from shared language constructs.

                  • mnsc 7 days ago

                    By exaggerating the example and moving out of shared constructs, I was trying to get wegfaw... to see what it is like to be "annoyingly sensitive". Even though "hit by a bus" is "well-understood", it could still evoke very graphic memories if, for example, someone has actually lost someone in a bus accident. Sometimes words are not just words.

                    But I could imagine that someone who states it's his "right" to be as irreverent as he likes to whomever he wants is not open to the concept of other people's perspectives.

                    And just to be clear, I wasn't saying that you should use my example instead. :D

                    • wegfawefgawefg 5 days ago

                      I am open to other perspectives, but that does not mean I must endorse all of them.

                      If it makes you feel any better I have given your perspective what I consider to be a fair consideration.

                      It just happens to violate my reward function.

                      • mnsc 4 days ago

                        Well that's all you can do and in that case I misjudged you, sorry about that.

            • wegfawefgawefg 5 days ago

              To be honest, if you said that in a meeting I would laugh and immediately want to make friends with you.

              For me this sort of signaling isn't a sign of aggressive anti-cooperation, but a sign that you see through bullshit, will point out wrongdoing, and will be candid. It doesn't evoke disgust or discomfort. It feels honest and friendly.

              In the opposite case, business-casual English feels cold and disingenuous to me, like reading apologies by GPT. I feel it as annoying insincerity. (I give a pass to anyone over 40, and usually find in private they are actually human after all.)

              Different strokes.

            • coldtea 7 days ago

              Well, before everybody got sensitive I'd be OK with that too. It's not even very different from the real-world teasing talk and examples that actual dev teams used before the cult of HR grew.

              And of course it's a strawman exaggerated version of the common "hit by a bus" idiom.

        • rrr_oh_man 7 days ago

          > it implies that if one of us dies, the key worry of management is who's left who knows how to deploy the foo-banana service

          Story from this week:

          A manager at a large-ish IT support company, visibly distraught, tells her own manager that one of her direct reports got cancer.

          Response: "Oh, and I thought something bad happened, like the direct report quit."

          • manfre 7 days ago

            Sorry you have to work with a toxic manager.

            • jpc0 7 days ago

              "Got cancer" ranges from "will be out of the office a few times a month, for a few months to a year" through to "won't be here next month".

              You are reading a lot into words on an internet screen. This manager may very well be toxic; they may also just not express emotion the same way, and may well have gone above and beyond putting everything in place for said employee to receive all the medical support they need, mental and physical.

              We don't know and for all we know OP is in a bad space and completely misread the situation.

              This whole comment is devil's advocate; you can make the same argument for the exact opposite, and of course your personal experience matters. But leave the airing of dirty laundry out of public forums, and if I do the same, I encourage anyone to call me out on it.

        • Ma8ee 8 days ago

          > the key worry of management is who's left who knows how to deploy the foo-banana service.

          While it might be uncomfortable to be reminded of, it is in most cases true.

          The company isn’t your family or your friends, and most managers adhere to the doctrine that the company’s single most important responsibility is to create value for the stockholders.

        • coldtea 7 days ago

          > because it implies that if one of us dies, the key worry of management is who's left who knows how to deploy the foo-banana service.

          That will exactly be the key worry of the management.

          We've had team members lost like that - and that was the worry.

          They'll still feel sorry, say their condolences, and might even go to the funeral, but in the end the work continues, almost immediately.

        • bluGill 7 days ago

          If there is a backup plan, then management can care about your death. If there isn't one and you die, they can't care enough even to attend your funeral, as keeping things running will consume them.

        • tome 7 days ago

          > This phrase is a reminder that nobody would care if you died apart from the fact that only you know how to do x and y.

          An alternative interpretation is "we have to keep doing x and y to earn our living, and it's doubly difficult because not only did you know how to do them but we're all having difficulty coping with your passing".

        • yakshaving_jgt 8 days ago

          > This phrase is a reminder that nobody would care if you died

          I don’t agree that the phrase implies that at all.

    • RadiozRadioz 7 days ago

      I don't like "win the lottery" because it implies that the sole reason for the person staying is money; it devalues their loyalty to the team.

      Everyone's working for money in reality of course, but to me it doesn't feel team-spirited to call it out.

    • cryptonector 8 days ago

      "Key person risk" captures all the possibilities.

    • coldtea 7 days ago

      >But personally, if I win the lottery, I’m still doing a handover.

      How much time are you devoting to it? Enough for the project to ship, which could take months or a year, or merely enough to hastily give some other devs a breakdown of the thing, and "so long suckers" / hope for the best?

      And, while you might (still do a handover), would others?

    • euroderf 6 days ago

      Good documentation commits as much as possible to paper, turning the implicit into the explicit. But then ya gotta keep it up to date.

    • arendtio 8 days ago

      > But personally, if I win the lottery, I’m still doing a handover.

      I wonder why we talk about such rare events. People get sick, and burnout is very common, too. Some people look for a new job, and when they find it, they call in sick just to not have to handle all the stress of their old job.

      There are plenty of more likely examples of not having a handover.

    • ijidak 7 days ago

      - Kidnapped to paradise

      - Abducted by beautiful aliens

  • Scarblac 7 days ago

    It's happened twice now during my career that important colleagues were literally hit by buses.

    And both were back to work in about a week.

    I need a different standard example disaster.

  • gofreddygo 7 days ago

      > Pro tip: use "win the lottery" instead to avoid the negative connotation of injury or death.
    
    Winning the lottery is still quite a euphemism for a more common outcome: getting laid off.

    I use "the next person" more often to drive the point.

    And there's the worse situation of "burnout" where you still have headcount but they've mentally checked out.

  • skeeter2020 7 days ago

    How about "takes a 3 week vacation"? I've seen lots of companies that can't survive that let alone a permanent departure. Or "increase bus count" which refers to removing a single point of failure and focuses on the motivation vs the cause of why you're in trouble. If you were doing root cause analysis you shouldn't stop at "because Larry got hit by a bus / won the lottery" because that's not the issue.

  • ollien 8 days ago

    I've heard that positive spin responded to with "this company is my biggest investment, I'm not going anywhere" :)

  • rixed 7 days ago

    > using standard and well known tools - all of those would help both sides here.

    What if standard tools become unnecessarily complicated and are poorly designed?

  • foooorsyth 8 days ago

    “Win the lottery” doesn’t really work in tech. So many mid career engineers are deca-millionaires with 15-20 years of grants and refreshers. They’re working because they enjoy the work.

    Even at “regular” companies that don’t have hockey-stick stock graphs, lots of the senior engineering staff are quite well off by their 40s. One of my division leads sold his last startup for tens of millions of dollars. He still sends emails late on Friday nights.

    • IshKebab 8 days ago

      Maybe in silicon valley. Not in the vast majority of the tech world.

breadwinner 8 days ago

Architecture for architecture's sake is the worst thing you can do because it unnecessarily increases complexity.

The ultimate goal of good architecture is cost reduction. If your architecture is causing you to spend more time developing and maintaining code then the architecture has failed.

  • nine_k 8 days ago

    Some architectures are very cheap to implement initially but are more expensive to maintain and evolve. Others are the other way around: they require higher upfront expense but allow you to operate and evolve the product more easily.

    It's always a balancing act. The corollary is that there is no one right architecture; the choice depends on circumstances, and should sometimes be re-evaluated.

    Another corollary is that flexibility is extra useful, because it allows you to evolve and tweak the architecture to some extent, maintaining its efficiency in changing circumstances.

    • serial_dev 8 days ago

      It's not always a balancing act; that framing implies that everything has a good side and you just need to know what to focus on.

      That's not true: there are architectures that are a waste of time today and will still be a waste of time in 5 years.

      • SgtBastard 7 days ago

        If that architecture has no value in any circumstances at the beginning of a project and no value once the code base has matured, it’s an anti pattern.

        • metaltyphoon 7 days ago

          Most devs/architects who initially designed the system tend not to stick around and find out whether what they did actually worked.

    • flowerlad 8 days ago

      Over engineering is the result when you focus too much on “future” needs.

      • sverhagen 8 days ago

        And under-engineering is the result when you don't sit down at the beginning of the project and plan for the known scope and requirements of that project. They are both bad ways to go about it. So, like the title says: "just enough", so not more, but also not less.

        It also doesn't always have to be a dedicated architect in a proverbial ivory tower. It can be an informal meeting of developers around a whiteboard. Don't get scared by the word "architecture".

        • mainde 7 days ago

          While I agree that under-engineering can be a problem, it's generally very easy to fix by doing what's now clearly identified as missing/needed. Fixing over-engineering is always a nightmare and rarely a success.

          IMHO, the experience, pragmatism, and soft skills needed for a successful, productive informal architecture meeting are too rare for this solution to work consistently.

          Personally, I've abandoned all hope and interest in software architecture. While on paper it makes a lot of sense, in practice (at least in what I do) it just enables way too many people to throw imaginary problems into the mix, distracting everyone from what matters with something that may eventually be a concern once/if a system hits a top-1% level of criticality/scale.

          • gfairbanks 7 days ago

            > throw imaginary problems in the mix

            Yes, this happens too easily. It's the crux of Ward Cunningham's original observation on tech debt discussed recently [1]. He basically said: all of you thinking you can use waterfall to figure it all out up front are deluded. By getting started right away, you make mistakes and you avoid working on non-problems. I can fix the mistakes with refactoring but you can't ever get your time back.

            Most teams live in his world now. Few do too much up-front design, most suffer from piled up tech debt.

            I hope you give architecture another chance. Focus on the abstractions themselves [2] and divorce that from the process and team roles [3].

            [1] https://news.ycombinator.com/item?id=40616966#40624446

            [2] Software Architecture is a Set of Abstractions, George Fairbanks, IEEE Software July 2023. https://www.computer.org/csdl/magazine/so/2023/04/10176187/1...

            [3] JESA section 1.5, https://www.georgefairbanks.com/assets/jesa/Just_Enough_Soft... "Job titles, development processes, and engineering artifacts are separable, so it is important to avoid conflating the job title “architect,” the process of architecting a system, and the engineering artifact that is the software architecture."

      • gfairbanks 7 days ago

        In my own gut, I have a sense of the right amount of time to spend on design. Assume (falsely) for a moment that I'm right: how can I transfer that gut sense to you or anyone? Chapter 3 of the book is my attempt to share that gut feel [1].

        Even if you clone us, our gut feel doesn't transfer to our clones. So, we can recognize under- and over-engineering in our guts, but how can we help someone else? Is there a better way than the risk-driven model?

          1. Identify and prioritize risks
          2. Select and apply a set of techniques
          3. Evaluate risk reduction
        
        [1] JESA Chapter 3: Risk-Driven Model. https://www.georgefairbanks.com/assets/jesa/Just_Enough_Soft...
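        The three-step loop above can be caricatured in code. A purely illustrative sketch, not from the book; the risks, severity scores, and technique names are all invented:

```python
# Purely illustrative sketch of the risk-driven loop; the risks,
# severity scores, and techniques below are invented for the example.

ACCEPTABLE = 3  # severity at or below which we stop designing and code


def next_design_step(risks, techniques):
    """One turn of: identify/prioritize, apply a technique, re-evaluate."""
    # 1. Identify and prioritize: pick the most pressing risk.
    risk, severity = max(risks.items(), key=lambda kv: kv[1])
    if severity <= ACCEPTABLE:
        return "start (or resume) coding"
    # 2. Select and apply a technique that targets that risk.
    technique = techniques.get(risk, "prototype / spike")
    # 3. Evaluate risk reduction (here we just pretend the technique halves it).
    risks[risk] = severity // 2
    return f"apply: {technique}"


risks = {
    "server may not scale to 1000 users": 8,
    "logo colour disliked": 1,
}
techniques = {
    "server may not scale to 1000 users": "throughput model + load test",
}
print(next_design_step(risks, techniques))  # tackles the scaling risk first
```

        The point of the caricature: effort goes to the highest-severity risk first, and design stops as soon as the remaining risks are tolerable.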

    • j45 7 days ago

      Pre-mature optimization and over-engineering are a greater issue than the opposite.

      Generally, clever architecture at the right stages can beat clever code and leave flexibility to pivot.

  • javaunsafe2019 8 days ago

    The ultimate goal of SA is to fulfil the quality goals. Cost reduction can be one of them.

    • CraigJPerry 8 days ago

      The goal of software architecture is

      >> cost reduction

      > fulfil the quality goals

      I’d word it as “keeping the cost of changing software low over the long term”.

      I don’t think you can reduce that to a quality goal of “modifiability” because it’s not negotiable like other quality goals.

      I don’t think you can say it’s just cost reduction, but that is closer.

      It’s an existential thing. Architecture is about retaining the ability to change your software (avoiding the big ball of mud). If you lose the ability to change within time and resource constraints then the project, product or startup is dead.

      • gfairbanks 7 days ago

        For typical web / IT systems I largely agree with focusing on modifiability as a heuristic because on those kinds of systems it's typically the biggest risk.

        But, have you seen this kind of mistake / failure? A system is built so flexibly that it can handle all kinds of future needs, but it's slow. Maybe it's a shopping cart that takes 5 seconds to update. So start with modifiability as the primary heuristic but keep an eye out for other failure risks.

        Meta-commentary:

        This is an example of why it's so hard to discuss architecture. My book talks about "failure risks", which is pretty abstract or generic. There's no easy heuristic for avoiding "failure risks" like there is for web / IT systems.

        Software architecture is a discipline that's bigger than just web / IT systems. Some systems must respond within X milliseconds, otherwise the result is useless, so the architecture should make that possible -- and preferably make it easy.

    • flowerlad 8 days ago

      Good architecture reduces cost of achieving quality.

    • banish-m4 8 days ago

      The category is nonfunctional requirements.

      • gfairbanks 7 days ago

        It's often taught as "nonfunctional requirements" or NFRs. The architecture community says "quality attributes". Why?

        1) Not all qualities are requirements. Requirements tend to be pass/fail, either you meet them or you don't. Latency is a quality and typically lower is better, not pass/fail (though sometimes it is).

        2) "Nonfunctional" in other contexts means broken. If you saw a machine with a sign on it saying "nonfunctional" what would you conclude?

        At one point I tried to find the origin of the term "quality attributes". It's way older than the software community; I found it being used in the 1960s by the US National Academy of Sciences. If anyone knows the origin, I'm interested in learning more.

  • gfairbanks 7 days ago

    How much architecture is enough? Chapter 3 Risk-Driven Model [1] guides you to do as little architecture as possible. It says:

    "The risk-driven model guides developers to apply a minimal set of architecture techniques to reduce their most pressing risks. It suggests a relentless questioning process: “What are my risks? What are the best techniques to reduce them? Is the risk mitigated and can I start (or resume) coding?” The risk-driven model can be summarized in three steps:

      1. Identify and prioritize risks
      2. Select and apply a set of techniques
      3. Evaluate risk reduction
    
    You do not want to waste time on low-impact techniques, nor do you want to ignore project-threatening risks. You want to build successful systems by taking a path that spends your time most effectively. That means addressing risks by applying architecture and design techniques but only when they are motivated by risks."

    An example of "architecture" is using the Client-Server style, where servers never take initiative and simply respond to client requests. That might be a good or bad fit to the problem.

    [1] https://www.georgefairbanks.com/assets/jesa/Just_Enough_Soft...
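    As a toy illustration of that constraint (the inventory service and its request shape here are invented, not an example from the book): the server exposes a single request handler and has no channel for taking initiative, while the client drives every exchange.

```python
# Minimal sketch of the Client-Server style's core constraint: the
# server is purely reactive -- it only answers requests, and every
# interaction is initiated by a client.

class Server:
    """Passive party: responds to requests, never initiates contact."""

    def __init__(self):
        self._inventory = {"banana": 3}

    def handle(self, request: dict) -> dict:
        # The server's only entry point; it has no way to push
        # messages to clients on its own.
        if request.get("op") == "get":
            item = request.get("item")
            return {"ok": True, "count": self._inventory.get(item, 0)}
        return {"ok": False, "error": "unknown op"}


class Client:
    """Active party: initiates every exchange."""

    def __init__(self, server: Server):
        self._server = server

    def query(self, item: str) -> int:
        response = self._server.handle({"op": "get", "item": item})
        return response["count"] if response["ok"] else 0


server = Server()
client = Client(server)
print(client.query("banana"))  # -> 3; the client drives, the server only answers
```

    A bad fit would be, say, a system that needs server-initiated notifications: the style simply has no place for them, which is exactly the kind of mismatch to catch up front.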

  • banish-m4 8 days ago

    Big grand architecture invariably leads to an elitism culture with overpaid technical architects who don't do much but insist on terrible patterns that software engineers must implement under unreasonable constraints, including deadlines.

    • gfairbanks 7 days ago

      One of my main goals with the book was to "democratize" architecture, to make it accessible and relevant to every developer. As the cover blurb says:

      "It democratizes architecture. You may have software architects at your organization — indeed, you may be one of them. Every architect I have met wishes that all developers understood architecture. They complain that developers do not understand why constraints exist and how seemingly small changes can affect a system’s properties. This book seeks to make architecture relevant to all software developers, not just architects."

      In hindsight, my book hasn't been very good at that but perhaps it was a stepping stone on the path. Michael Keeling's book Design It: From Programmer to Software Architect does a better job of saying how developers can engage with architecture ideas. I'm a personal friend of his and a huge fan of his work. His experience report [2] on how to democratize architecture is what I aspire to do.

      [1] Michael Keeling, Design It: From Programmer to Software Architect, https://pragprog.com/titles/mkdsa/design-it/

      [2] Keeling and Runde, Agile2018 Conference. Share the Load: Distribute Design Authority with Architecture Decision Records. https://web.archive.org/web/20210513150449/https://www.agile...

  • jefffoster 7 days ago

    Not just cost reduction but allowing more investment. A good architecture can enable more people to work on your product.

    • gfairbanks 7 days ago

      Agreed. See: Scale Your Team Horizontally [1].

      "I’m not ready to argue against Brooks’ Law that adding people to a late project makes it later. But today, when developers are working on a clean codebase, I see lots of work happening in parallel with tool support to facilitate coordination. When things are going smoothly, it’s because the architecture is largely set, the design patterns provide guidance for most issues that arise, and the code itself (with README files alongside) allow developers to answer their own questions."

      [1] Scale Your Team Horizontally, George Fairbanks, IEEE Software July 2019. https://www.georgefairbanks.com/ieee-software-v36-n4-july-20...

pbnjay 8 days ago

Published in 2010? Curious how much of it has survived since then?

I like “Design It” because of some of the workshop/activities that are nice for technical folks who need to interact with stakeholders/clients (I’m in a consulting role so this is more relevant). Also it doesn’t lean hard on specific technical architectural styles which change so frequently…

  • jeremyjh 8 days ago

    I can't think of many things that have changed in architecture since 2010. I'm not talking about fads but about actual principles.

    • kqr 7 days ago

      Since 1970, to be fair... The people at the NATO Software Engineering conferences of '68 and '69 knew quite a bit about architecture. Parnas, my house-god in the area, published his best stuff in the 1970s.

    • zmmmmm 8 days ago

      probably containerisation is a big one, and also serverless computing

      they aren't principles as such, but they certainly play into what is important and how you apply them

    • pbnjay 8 days ago

      I mean shared hosting certainly existed but "the cloud" as we think of it today was much simpler and not nearly as ubiquitous. It doesn't really change the principles themselves but it certainly affects aspects of the risk calculus that dominates the table of contents.

    • CuriouslyC 8 days ago

      Things are changing now, pretty fast. The architecture that is optimal for humans is not the same architecture that is optimal for AI. AI wants shallow monoliths built around function composition using a library of helper modules and simple services with dependency injection.

      • wizzwizz4 8 days ago

        > I'm not talking about fads but about actual principles.

        Most problems are not well-addressed by shallow monoliths made of glue code. It's irrelevant what "AI wants", just as it's irrelevant what blockchain "wants".

        • cqqxo4zV46cp 8 days ago

          This response is entirely tribalist and ignores the differences between LLMs and ‘blockchain’ as actual technologies. To be blunt, I find it hard to professionally respect anyone that buys into these culture wars to the point where it completely overtakes their ability to objectively evaluate technologies.

          This isn’t me saying that anyone that has written off LLMs is an idiot. But to equate these two technologies in this context makes absolutely no sense to me just from a logical perspective. I.e. not involving a value judgment toward either blockchain or LLMs. The only reason you’re invoking blockchain here is because the blockchain and LLM fads are often compared / equated in these conversations.

          Nobody has suggested that blockchain technology be used to assist with the development of software in the way that LLMs are. It simply doesn’t make sense. These are two entirely separate technologies setting out to solve two entirely orthogonal problems. The argument is completely nonsensical.

          • wizzwizz4 5 days ago

            (A less snarky reply:) LLMs and blockchains are both special-purpose tools that are almost completely useless for their best-known applications (virtual-assistants and cryptocurrency, respectively). The social behaviour surrounding them is way more relevant than the actual technologies, and I don't think it's tribalistic to acknowledge that.

            People tried to use both as databases, put both in cars, invest in both. The vast majority of claims people make about them are just not evidenced, yet their hypist-adherents are so confident that they're willing to show you evidence that contradicts their claims, and call it "proof".

            Yes, the actual technologies are very different. But nobody is actually paying attention to the technologies (an ignorance that my other comment snarkily accuses you of displaying here – I probably should've been kinder).

          • wizzwizz4 7 days ago

            > Nobody has suggested that blockchain technology be used to assist with the development of software in the way that LLMs are. It simply doesn’t make sense.

            Linus Torvalds is a strong advocate. He even wrote a blockchain-based source code management system, which he dubbed “the information manager from hell”[0], spending over three months on it (six months, by his own account) before handing it over to others to maintain.

            People complain that this “information manager” system is hard to understand, but it's actively used (alongside email) for coordinating the Linux kernel project. Some say it's crucial to Linux's continued success, or even that it's more important than Linux.

            [0]: see commit e83c5163316f89bfbde7d9ab23ca2e25604af290

        • CuriouslyC 8 days ago

          If you think development velocity doesn't matter, you should talk to the people who employ you.

          • loup-vaillant 8 days ago

            If you think AI helps speed up development…

            • wizzwizz4 8 days ago

              AI does help speed up development! It lets you completely skip the "begin to understand the requirements" and "work out what's sensible to build" steps, and you can get through the "type out some code" parts even faster than copy-pasting from Stack Overflow (at only 10× the resource expenditure, if we ignore training costs!).

              It does make the last step ("have a piece of software that's fit-for-purpose") a bit harder; but that's a price you should be willing to pay for velocity.

              • jeffreygoesto 7 days ago

                Poor code monkeys. I've been in an industry where software bugs can severely harm people for over 20 years, and the fastest code never survived. It always solved only the cheap and easy 70% of the job, and the remaining errors almost killed the project; everything had to be reworked properly. Slow is smooth and smooth is fast. "Fast" code costs you four times: you write it, discuss why it is broken, remove it, and rewrite it.

            • CuriouslyC 8 days ago

              I don't have to think, people have done research.

              • grugagag 8 days ago

                Only time will tell. Right now this sounds like everything that was claimed in each technological cycle, only to be forgotten about after some time. Only after a while do we come to our senses: some things simply stick while others ‘evolve’ in other directions (for lack of a better word).

                Maybe this time it’s different, maybe it’s not. Time will tell.

                • saghm 8 days ago

                  While I don't disagree with you (and tend to be more of an AI skeptic than enthusiast, especially when it comes to being used for programming), this does weaken the earlier assertion that AI was brought up in response to; "things that have changed in architecture since 2010" is a lot more narrow if you rule out anything that's only come about in the past couple of years by definition due to not having been around long enough to prove longevity.

              • PaulDavisThe1st 8 days ago

                It is sad, and confusing, to read comments like this on HN.

                I mean, you're not even wrong.

            • cqqxo4zV46cp 8 days ago

              Please tell me about how you once asked ChatGPT to write something for you, saw a mistake in its output, and immediately made your mind up.

              I’ve been writing code professionally for a decade. I’ve led the development of production-grade systems. I understand architecture. I’m no idiot. I use Copilot. It’s regularly helpful and saves time. Do you have a counter-argument that doesn’t involve some sort of thinly veiled “but you’re an idiot and I’m just a better developer than you”?

              I don’t by any means think that a current generation LLM can do everything that a software developer can. Far from it. But that’s not what we are talking about.

              • viraptor 8 days ago

                We'll need some well researched study on how much LLMs actually help vs not. I know they can be useful in some situations, but it also sometimes takes a few days away from them to realise the negative impacts. Like the "copilot pause" coined by Primeagen - you know the completion is coming, so you pause when writing the trivial thing you knew how to do anyway and wait for the completion (which may or may not be correct, wasting both time and an opportunity to practice on your own). Self-reported improvement will be biased by impression and factors other than the actual outcome.

                It's not that I don't believe your experience specifically. I don't believe either side in this case knows the real industry-wide average improvement until someone really measures it.

                • fragmede 7 days ago

                  Unfortunately, we still don't have great metrics for developer productivity, other than the hilari-bad lines of code metric. Jira tickets, sprints, points, t-shirt sizes; all of that is to try and bring something measurable to the table, but everyone knows it's really fuzzy.

                  What I do know though, is that ChatGPT can finish a leetcode problem before I've even fully parsed the question.

                  There are definitely ratholes to get stuck and lose time in when trying to get the LLM to give the right answer, but LLM-unassisted programming has the same problem. When using an LLM to help, there's a bunch of different contexts I don't have to load in because the LLM is handling them, giving me more head space to think about the bigger problems at hand.

                  No matter what a study says, as soon as it comes out, it's going to get picked apart because people aren't going to believe the results, no matter what the results say.

                  This shit's not properly measurable like in a hard science so you're going to have to settle for subjective opinions. If you want to make it a competition, how would you rank John Carmack, Linus Torvalds, Grace Hopper, and Fabrice Bellard? How do you even try and make that comparison? How do you measure and compare something you don't have a ruler for?

                  • viraptor 7 days ago

                    > that ChatGPT can finish a leetcode problem before I've even fully parsed the question.

                    This is an interesting case for two reasons. One is that leetcode is for distilled elementary problems known in CS - given all CS papers or even blogs at disposal, you should be able to solve them all by pattern matching the solution. Real work is anything but that - the elementary problems have solutions in libraries, but everything in between is complicated and messy and requires handling the unexpected/underdefined cases. The second reason is that leetcode problems are fully specified in a concise description with an example and no outside parameters. Just spending the time to define your problem to that level for the LLM is likely getting you more than halfway to the solution. And that kind of detailed spec really takes time to create.

                  • jerf 7 days ago

                    "What I do know though, is that ChatGPT can finish a leetcode problem before I've even fully parsed the question."

                    You have to watch out for that, that's an AI evaluation trap. Leetcode problems are in the training set.

                    I'm reminded of people excitedly discussing how GPT-2 "solved" the 10 pounds of feathers versus 10 pounds of lead problem... of course it did, that's literally in the training set. GPT-2 could be easily fooled by changing any aspect of the problem to something it did not expect. Later ones less so, though when I last tried a few months ago, while they got it right more often than wrong, they could still be pretty easily tripped up.

                    • fragmede 6 days ago

                      What that is though, is an LLM-usefulness trap. Yeah, the leetcode problem is only solved by the LLM because it's in the training data, and you can trick the LLM with some logic puzzle that's also difficult for dumb humans. But that doesn't stop it from being useful and outputting code that seems to save time.

                      • loup-vaillant 5 days ago

                        Even if it works and saves time, it may make us pay that time back when it doesn’t work. Then we need to actually think for ourselves, but we’ve been dulled. Best case, we lose time on those cases. More realistically, we let bugs through. Worst case, our minds, dulled by the lack of daily training, are no longer capable of solving the problem at all, and we have to train all over again until we can… possibly until we’re fired or the project is cancelled.

                        Most likely though, code quality will suffer. I have a friend who observes what people commit every day, and some of them (apparently plural) copy & paste answers from an LLM and commit it before checking that it even compiles. And even when it works, it’s often so convoluted there’s no way it could pass any code review. Sure if you’re not an idiot you wouldn’t do that, but some idiots use LLMs to get through interviews (it sometimes works for remote assignments or quizzes), and spotting them on the job sometimes takes some time.

                        LLMs for coding are definitely useful. And harmful. How much I don’t know, though I doubt right now that the pros outweigh the cons. Good news is though, as we figure out the good uses and avoid the bad ones, it should gradually shift towards "more useful than not" over time. Or at least, "less harmful than it was".

                        • fragmede 3 days ago

                          That's one possibility. The other direction is that it takes the dull parts out of the job, so I'm no longer spending cycles on dumbass shit like formatting JSON properly, and my mind can stay focused on problems bigger than whether there should be a comma at the end of a line or not. Best case, our minds, freed from the drudgery of tabs vs spaces, are sharpened by being able to focus on the important parts of the problem rather than the dumb parts.

                        • wizzwizz4 5 days ago

                          > some of them (apparently plural) copy & paste answers from an LLM and commit it before checking that it even compiles.

                          If I were using one of these things, that's what I'd do. (Preferably rewriting the commit to read Author: AcmeBot, Committer: wizzwizz4) It's important that commit history accurately reflect the development process.

                          Now, pushing an untested commit? No no no. (Well, maybe, but only ever for backup purposes: never in a branch I shared with others.)

              • albedoa 7 days ago

                > Do you have a counter-argument that doesn’t involve some sort of thinly veiled “but you’re an idiot and I’m just a better developer than you”?

                Requiring that the counter-argument reaches a higher bar than both your and the original argument is...definitely a look!

      • jerf 7 days ago

        Nobody has ten years of experience with a code base "optimized for AI" to be able to state such a thing so confidently.

        And nobody ever will, because in 10 years, coding AIs will not look like they do now. Right now they are just incapable of architecture, which your supposed optimal approach seems to be optimizing for, but I wouldn't care to guarantee that's the case in 10 years. If nothing else, there will certainly be other relevant changes. And you'll need experience to determine how best to use those, unless they just get so good they take care of that too.

      • wolfgang42 8 days ago

        I don’t know about this architecture for AI, but your description sounds like the explanations I’ve heard of the Ruby on Rails philosophy, which is clearly considered optimal by at least some humans.

      • afro88 8 days ago

        Maybe this post wasn't the right one for your comment, hence the downvotes.

        But I find it intriguing. Do you mean architecting software to allow LLMs to modify and extend it? Having more of the overall picture in one place (shallow monoliths) and lots of helper functions and modules to keep code length down? I.e., optimising for the input and output context windows?

        • CuriouslyC 7 days ago

          LLMs are very good at first order coding. So, writing a function, either from scratch or by composing functions given their names/definitions. When you start to ask it to do second or higher order coding (crossing service boundaries, deep code structures, recursive functions) it falls over pretty hard. Additionally, you have to consider the time it takes an engineer to populate the context when using the LLM and the time it takes them to verify the output.

          LLMs can unlock incredible development velocity. For things like creating utility or helper functions and their unit tests at the same time, an engineer using an LLM will easily 10x an equally skilled engineer not using one. The key is to architect your system so that as much of it as possible can be treated this way, while not making it indecipherable for humans.
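          As a concrete illustration of that "first order" unit of work, here is the kind of self-contained helper-plus-unit-test pair being described (the function name and behavior are hypothetical, not from the thread):

```python
# Hypothetical "first order" unit: a self-contained helper plus its
# unit tests, written in one pass -- the kind of task the comment
# above says LLMs handle well.
import re
import unittest

def slugify(title: str) -> str:
    """Lowercase a title and join its words with hyphens for URLs."""
    words = re.findall(r"[a-z0-9]+", title.lower())
    return "-".join(words)

class SlugifyTest(unittest.TestCase):
    def test_basic(self):
        self.assertEqual(slugify("Hello, World!"), "hello-world")

    def test_strips_punctuation_and_whitespace(self):
        self.assertEqual(slugify("  A --  B!  "), "a-b")

# Run the tests with: python -m unittest <module_name>
```

          No service boundaries to cross, no deep call stacks: the whole unit is verifiable in isolation, which is exactly what makes it cheap to review.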

          • hollerith 7 days ago

            >while not making it indecipherable for humans

            This is a temporary constraint. Soon the maintenance programmers will use an AI to tell them what the code says.

            The AI might not reliably be able to do that unless it is in the same "family" of AIs that wrote the code. In other words, analogous to the situation today where choice of programming language has strategic consequences, choice of AI "family" with which to start a project will tend to have strategic consequences.

      • rednafi 8 days ago

        This whole part sounds like BS mumbo jumbo. AI isn’t developing any system anytime soon and people surely aren’t going to design systems that cater to the current versions of LLMs.

        • dcow 8 days ago

          Have you heard of modular, mojo, and max?

          • viraptor 8 days ago

            They're designed for fast math and Python-like syntax in general. Llama.cpp, on the other hand, is designed for LLMs as we use them right now. But Mojo is general purpose enough to support many other "fast Python" use cases, and if we completely change the architecture of LLMs, it's still going to be great for them.

            It's more of a generic system with attention to the performance of a specific application, rather than a system designed to cater to current LLMs.

            • dcow 8 days ago

              No. Max is an entire compute platform designed around deploying LLMs at scale. And Mojo takes a Python syntax (it’s a superset) but reimplements the entire compiler so you (or the compiler on your behalf) can target all the new AI compute hardware that’s almost literally popped up overnight. Modular is the company that raised 130MM dollars in under 2 years to make these two plays happen. And Nvidia is on fire right now. I can assure you without a sliver of a doubt that humans are most certainly redesigning entire computing hardware and the systems atop to accommodate AI. Look at the WWDC Keynote this year if you need more evidence.

              • viraptor 7 days ago

                Sure it's made to accommodate AI or more generally fast vector/matrix math. But the original claim was about "people surely aren’t going to design systems that cater to the current versions of LLMs." Those solutions are way more generic than current or future versions of LLMs. Once LLMs die down a bit, the same setups will be used for large scale ML/research unrelated to languages.

        • cqqxo4zV46cp 8 days ago

          What? The entire point of the comment you’re replying to is that the LLM isn’t designing the system. That’s why it’s being discussed in the first place. LLMs certainly currently play a PART in the ongoing development of myriad projects, as made evident by Copilot’s popularity to say the least. That doesn’t mean that an LLM can do everything a software developer can, or whatever other moving goalpost arguments people tend to use. They simply play a part. It doesn’t seem outside of the realm of reason for a particularly ‘innovative’ large-scale software shop to at least consider taking LLMs into account in their architecture.

          • CuriouslyC 7 days ago

            The skeptics in this thread have watched LLMs flail trying to produce correct code with their long imperative functions, microservices, and magic variables, and assumed that their architecture is good and LLMs are bad. They don't realize that there are people 5xing their velocity _with unit tests and documentation_ because they designed their systems to play to the strengths of LLMs.

      • ysofunny 8 days ago

        AI wanted you to write code in GoLang so that it could absorb your skills more faster. kthanksbai

  • zeroCalories 8 days ago

    Process at my work is heavily influenced by this book, and I think it gives a pretty good overview of architecture and development processes. Author spends a lot of time in prose talking about mindset, and it's light on concrete skills, but it does provide references for further reading.

  • gfairbanks 7 days ago

    Keeling's Design It book is great [1]. It helps teams engage with architecture ideas with concrete activities that end up illuminating what's important. My book tries to address those big ideas head-on, which turns out to be difficult, pedagogically, because it's such an abstract topic.

    Which ideas have survived since 2010?

    Some operating systems are microkernels, others are monolithic. Some databases are relational, others are document-centric. Some applications are client-server, others are peer-to-peer. These distinctions are probably eternal and if you come back in 100 years you may find systems with those designs even though Windows, Oracle, and Salesforce are long-gone examples. And we'll still be talking about qualities like modifiability and latency.

    The field of software architecture is about identifying these eternal abstractions. See [2] for a compact description.

    "ABSTRACT: Software architecture is a set of abstractions that helps you reason about the software you plan to build, or have already built. Our field has had small abstractions for a long time now, but it has taken decades to accumulate larger abstractions, including quality attributes, information hiding, components and connectors, multiple views, and architectural styles. When we design systems, we weave these abstractions together, preserving a chain of intentionality, so that the systems we design do what we want. Twenty years ago, in this magazine, Martin Fowler published the influential essay “Who Needs an Architect?” It’s time for developers to take another look at software architecture and see it as a set of abstractions that helps them reason about software."

    [1] Michael Keeling, Design It: From Programmer to Software Architect, https://pragprog.com/titles/mkdsa/design-it/

    [2] George Fairbanks, Software Architecture is a Set of Abstractions Jul 2023. https://www.computer.org/csdl/magazine/so/2023/04/10176187/1...

rowls66 8 days ago

I found 'A Philosophy of Software Design' by John Ousterhout to be useful. It contains a lot of solid, easy to understand advice with many examples.

  • gchaincl 8 days ago

    Great book, I've learnt a lot from it

evmar 8 days ago

I don't know this book in particular, but I know the author from their writing about "Intellectual Control", which is extremely insightful:

https://www.georgefairbanks.com/ieee-software-v37-n3-may-202...

b1ld5ch1rm5por7 7 days ago

Within my prior company they circulated this "Software Architecture for Developers" book by Simon Brown: https://leanpub.com/b/software-architecture.

It's still on my reading list, though. I've moved on from that company. But it came highly recommended, because they also documented their architecture with the C4 model.

Has anyone here read it?

xhevahir 8 days ago

"Risk-dependent" would be a much better name for this methodology. (Why are programmers so fond of this "[X]-driven" phrase?)

  • canadaduane 8 days ago

    Personally, I've always thought of "X-driven" as a mechanically derived metaphor. This shaft drives that gear, which drives that wheel, etc. It's a short form for "what is the most powerful mechanism of this complex thought machinery".

bjt 8 days ago

We did a book club on this at work a few years back. I found it extremely repetitive.

__rito__ 8 days ago

Is this a good resource for someone starting a non-trivial OSS project? Or something a solopreneur will derive value from? Can you suggest some books or other resources that would be of value to solo developers?

ysofunny 8 days ago

software architecture is like regular architecture, except its civil engineering counterpart does not exist, because there hasn't been an Isaac Newton of software. I'd say the closest so far is Claude Shannon

  • JackMorgan 8 days ago

    No software engineering practice, architecture, language, or tooling is known to be more effective than another, because we don't even have units of measurement. We are still in the "hope it doesn't fall down" stage of software engineering.

    This has profound effects on self-reported productivity.

    For example, biking feels faster than driving 30mph with the windows up down a small suburban road with lots of stop signs. But typically the driver will still get 20 blocks away much much faster.

    However if we had no units of measurement, everyone would be arguing the bike was faster.

    This is where we are with software engineering.

    • viraptor 8 days ago

      Also: We don't repeat the same thing enough times in the open to be able to compare. The things we do repeat stay private.

      The only companies that really have enough data are the consulting giants. And they have enough perverse incentives that I would be extremely sceptical of their data.

    • danielovichdk 7 days ago

      Great analogy with the bike and car. I will use that in the future.

  • jillesvangurp 7 days ago

    This is actually the mistaken premise that underlies the whole notion of software architecture and design. Building software is not at all like building a bridge or a skyscraper. It's more similar to designing those.

    With big architecture projects you first design them and then build them. This is a lot of work. You have to think about everything, run simulations, engage with stakeholders, figure out requirements and other constraints, think about material cost, weight, etc. Months/years can be lost with big construction projects just coming up with the designs. The result is a super detailed blueprint that covers pretty much all aspects of building the thing.

    It's a lot like building software, actually. These design projects have a high degree of uncertainty, risk, etc. But it's better to find out that it's all wrong before you start building and using massive amounts of people, concrete, steel, and other expensive resources. But when have you ever heard an architect talk about making a design for their design to mitigate this? It's not a thing. There might have been a sketch or a napkin drawing at some point at best. SpaceX has introduced some agile elements into their engineering, which is something they learned from software development.

    With software, your finished blueprint is executable. The process of coming up with one is manual; the process of building software from the blueprint is typically automated (using compilers and other tools) and so cheap that developers do it all the time (which wasn't always the case). The process of creating that executable blueprint has a lot of risks, of course. And there might be some napkin/whiteboard designs here and there. But the notion that you first do a full design and then a full implementation (aka waterfall) was never a thing that worked in software either. With few exceptions, there generally are no blueprints for your blueprint.

    Read the original paper on Waterfall by Royce. It doesn't actually mention waterfalls at all and it vaguely suggests iterating might be a good idea (or at least doing things more than once). He fully got that the first design was probably going to be wrong. Agile just optimized away the low value step of creating designs for your blueprints that becomes apparent when you iterate a lot.

  • dgb23 7 days ago

    We have, or rather could have, data and metrics. We just tend to ignore them outside of specific domains.

    For example, from glancing at this summary and table of contents, it seems like there’s little to no mention of performance metrics. What is architecture good for if it doesn’t account for what the computer actually does?

    Even in terms of development productivity or UI: why don’t we have mathematical models to describe the mental stack one needs to develop, change, extend and, more importantly, use software?

    Why are compute resources (human or machine) rarely a consideration when they have a real, measurable impact on interacting with software as developers or users?

    • gfairbanks 7 days ago

      > it seems like there’s little to no mention of performance metrics.

      The book uses the jargon from the architecture community. Chapter 12 section 11 on Quality Attribute Scenarios is what you're looking for. But [1] seems to be a summary of Michael Keeling's treatment on qualities and scenarios, which I like better.

      My thinking on this has been greatly influenced by Titus Winters who I've been teaching with in Google for the past couple years. He's tied together the ideas of quality attribute scenarios, compile-time tests, monitored metrics, and alerts in a way that is, in hindsight, completely obvious but I've not seen elsewhere. Maybe we can get him to write that up as an essay.

      [1] https://dev.to/frosnerd/quality-attributes-in-software-1ha9

      [2] Titus Winters, Design is Testability, May 8 2024. https://on.acm.org/t/design-is-testability/3038 (note: video doesn't seem to be posted yet)

      • dgb23 7 days ago

        Thank you!

        I read the article about quality attributes you linked. Interestingly, it mixes attributes that are measurable, like performance and availability, with ones that are very hard to measure, like extensibility and adaptability.

        The latter group of attributes are often not measured at all in my experience. I don’t even know of a way that lets me quantify those in a sound, practical way.

        The opinions on how to optimize for these attributes are conflicting and full of contradictions. Nobody seems to have a clear model.

        What is your take on this?

  • nine_k 8 days ago

    While I agree with the general idea of your comparison, I must note that traditional architecture also involves a lot of deliberation and choices not dictated by formulas. Say, the Westminster Palace certainly involves some bits of civil engineering proper, but its defining features (the ornate texture, the iconic clock tower, the internal layout) are dictated mostly by functional and aesthetic choices.

    Same applies to much of software.

RcouF1uZ4gsC 7 days ago

“Bus factor” in my mind is one of the ways engineers self-sabotage.

How much are they paying the CEO?

Don’t they consider their replacement costs to be in the hundreds of millions?

MBA management types strive to convince everyone they are irreplaceable.

Engineers proudly talk about how they are easily replaceable.

Guess who gets shit on?

It is actually good that losing engineering talent be extremely painful for a company. This helps prevent the slow loss of engineering culture like what happened at Boeing.

matt_lo 8 days ago

Great book. I think it’s more ideal for folks who solution design day-to-day. Better when you have experiences to relate back with.

  • gfairbanks 7 days ago

    > Better when you have experiences to relate back with.

    100% this. I've been teaching software design since the 1990s and it's so much easier when the audience has enough experience that I can wave my hands and say "you know when things go like this ... ?" and they do, and then we can get on with how to think about it and do better.

    Without that, it's tedious. Folks with less experience have the brainpower to understand but not the context. I try to create the context with a synthetic example, but (again, waving hands...) you know how much richness your current system has compared to any example I can put on a page or a slide.

Sakos 8 days ago

So is this a good book? Are there any other software architecture books that are recommended reading?

  • JackMorgan 8 days ago

    I learned a lot about good software design from Structure and Interpretation of Computer Programs. It's free and has 350+ exercises. I just committed to doing one a day for a year.

    I learned so much from that book it's like I aged ten years in that one year. Revolutionary for me.

    I also loved

    - Designing Data-Intensive Applications

    - Design Of Everyday Things

    I've got a list of other influential books with my thoughts here:

    https://deliberate-software.com/page/books/

    My controversial takes are that the whole series of Domain Driven Design books are pretty poor. I've seen several teams fall into a mud pit of endless meetings arguing about entities vs repositories. Same thing with Righting Software. The books are all filled with vague statements so everyone just spends all this time debating what they mean. It turns teams from thinking critically into religious bickering. Same for all the design patterns books.

  • mbb70 8 days ago

    Designing Data-Intensive Applications by Martin Kleppmann. It's my "this is the book I wish I read when I started programming"

    • adamors 8 days ago

      When you started? I mean it’s a good book but it would be wasted on beginners.

      • mbb70 8 days ago

        Maybe not when I started, but after years of hard lessons working with HDFS, Cassandra and Spark (not to mention S3, Dynamo and SQS), seeing all those hard lessons pinned down like butterflies in a display case made me jealous of anyone who found this book early.

    • notduncansmith 8 days ago

      I read this book in the first few years of programming professionally, and in my naïveté I was so eager to apply the patterns therein I missed out on many opportunities to write simple, straightforward code. In some ways it really hindered my career.

      I don’t blame this on the book, of course; ultimately the intuition it helped me build has been very helpful in my work. With that said, as a particular type of feisty and eager young programmer at the time, I can now say I would have benefitted at least as much from a book titled “Designing Data-Unintensive Applications” :)

    • pragmatic 8 days ago

      We’re about due for a new edition. I read it again just recently, and in certain sections I would love some updates; many things have changed in 7 years.

    • JackMorgan 8 days ago

      An absolutely excellent book! I learned so much going through it slowly with a reading group I started at my job.

  • hemantv 8 days ago

    Game Programming Patterns was one which had a big impact on me.

    Other was Effective Engineer

    • vendiddy 8 days ago

      Is Game Programming Patterns relevant to those who are not building games?

      • mon_ 8 days ago

        Relevant excerpt from the Introduction chapter:

        ---

        Conversely, I think this book is applicable to non-game software too. I could just as well have called this book More Design Patterns, but I think games make for more engaging examples. Do you really want to read yet another book about employee records and bank accounts?

        That being said, while the patterns introduced here are useful in other software, I think they’re particularly well-suited to engineering challenges commonly encountered in games:

        - Time and sequencing are often a core part of a game’s architecture. Things must happen in the right order and at the right time.

        - Development cycles are highly compressed, and a number of programmers need to be able to rapidly build and iterate on a rich set of different behavior without stepping on each other’s toes or leaving footprints all over the codebase.

        - After all of this behavior is defined, it starts interacting. Monsters bite the hero, potions are mixed together, and bombs blast enemies and friends alike. Those interactions must happen without the codebase turning into an intertwined hairball.

        - And, finally, performance is critical in games. Game developers are in a constant race to see who can squeeze the most out of their platform. Tricks for shaving off cycles can mean the difference between an A-rated game and millions of sales or dropped frames and angry reviewers.

        • richrichie 8 days ago

          The performance part alone makes game development worth reading about for non-game developers. Retail off-the-shelf machines these days are so powerful that they encourage sloppy design and development.

          • water-your-self 8 days ago

            Performance is very much still relevant in the modern day, even given the area under the curve of Moore's law.

      • zamalek 8 days ago

        Games are just normal software dev dialed up to ten, though there are many problems that game developers enjoy not having to care about (and vice versa). Attempting to make a basic 3D engine is probably a good exercise for all developers - even if it goes uncompleted.

    • smnplk 8 days ago

      Is GPP relevant if you don't want to do OOP?

  • yashap 8 days ago

    Haven’t read this one, but have read a few - “Domain Driven Design” (by Eric Evans) was the most influential for me.

    • jojohohanon 8 days ago

      I am very interested in learning more / honing my design skills,

      But

      Like everyone else my time is extremely limited. Could you say a few words about what in DDD stood out for you, and what other books you might compare it to?

      • doctor_eval 8 days ago

        I think DDD is a great book but like many of these kinds of things, I also think it's one of those books that has a couple of chapters of good ideas and then a dozen chapters of filler.

        That said, ideas like ubiquitous language and bounded contexts are extraordinarily powerful, and definitely sharpened my own observations, so despite the filler, all in all I'd say it's one of the keystone books on software design and definitely worth reading.

        Another thing that DDD talks about, and is relevant to this, is that design and implementation are two sides of the same coin. In "Just Enough", Fairbanks says, "Every architect I have met wishes that all developers understood architecture". Well, I am not kidding when I say that I wish every architect I've ever met understood software. A lack of understanding of the technical constraints of computing is just as likely to lead to failure as misunderstanding the business constraints. They are both critical to success, and these people should be working and learning from each other, rather than operating in a hierarchy.

        To that point, one of the most influential things I've ever read was Code as Design: https://www.developerdotstar.com/mag/articles/reeves_design_...

        Since it's a series of essays, it has all the detail and none of the filler.

      • sbayeta 8 days ago

        (Not gp) I read this book more than a decade ago, when I was very inexperienced. The thing I remember the most, and I think the most valuable to me, is the idea of defining a shared domain language with the business domain experts, with a clearly defined meaning for each concept identified. For instance, what a "frozen" account exactly means, what's the difference with a "blocked" account. These are arbitrary, but must be shared among all the participants. This enables very precise and clear communication.

        • darkerside 8 days ago

          I think that's the core idea. The layer on top of that is the idea that the domain language should be expressed in an isolated core layer of your code. Here lies all your business logic. On top of that, you build application layers to adapt it as necessary to interact with the outside world, through things like web pages, emails, etc., as well as infrastructure layers to talk to your database, etc.
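          A minimal sketch of that layering (names like `Account` and `freeze_account` are illustrative, not from the book): the domain core knows the business rule, the infrastructure layer knows storage, and the application layer just orchestrates a use case.

          ```python
          from dataclasses import dataclass

          # --- Domain layer: pure business rules, no I/O ---
          @dataclass
          class Account:
              id: str
              frozen: bool = False

              def freeze(self) -> None:
                  if self.frozen:
                      raise ValueError("account already frozen")
                  self.frozen = True

          # --- Infrastructure layer: storage details (in-memory here) ---
          class AccountRepository:
              def __init__(self) -> None:
                  self._store: dict[str, Account] = {}

              def get(self, account_id: str) -> Account:
                  return self._store[account_id]

              def save(self, account: Account) -> None:
                  self._store[account.id] = account

          # --- Application layer: one use case, adapting the core to callers ---
          def freeze_account(repo: AccountRepository, account_id: str) -> None:
              account = repo.get(account_id)
              account.freeze()  # the domain rule lives in the core, not here
              repo.save(account)
          ```

          The point is that a web controller or email handler would call `freeze_account`, and swapping the repository for a real database wouldn't touch the domain core at all.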

        • zeroCalories 8 days ago

          At my job we have a shared glossary and data model that we use with business people, but do you really need a whole book on that?

          • jameshart 8 days ago

            DDD is about a lot more than just defining what it calls ‘ubiquitous language’. It helps you figure out how to constrain which concerns of the language used in one domain need to affect how other domains think about those things - through a model it calls ‘bounded contexts’. Like, in your fraud prevention context, ‘frozen accounts’ might have all sorts of nuances - there might be a legal freeze or a collections freeze on the account, with different consequences; outside the domain, though, the common concept of ‘frozen’ is all that’s needed. DDD gives you some tools for thinking about how to break your overall business down into bounded contexts that usefully encapsulate complexity, and define the relationships between those domains so you can manage the way abstractions leak between them.

            No silver bullet, of course, but, like most architectural frameworks, some useful names for concepts that give you the metavocabulary for talking about how to talk about your software systems.

            This brief chapter from the O’Reilly Learning DDD book gives a good flavor of some of the value of the concepts it introduces: https://www.oreilly.com/library/view/learning-domain-driven-...

            • zeroCalories 8 days ago

              Thanks for the explanation, I'll take a deeper look.

          • aspenmayer 8 days ago

            In highly regulated industries like banking, or other highly-secure environments, it’s a gradient: from an internal wiki or FAQs, to something more expansive and explicit like what you have, to an entire book, to an entire department or business unit for the most important concepts, which may vary between jurisdictions or be less explicitly defined, but are no less important or impactful to the running of the business or group.

      • crdrost 8 days ago

        Not the person you're asking but thought I would chip in.

        Domain Driven Design has a lot of good ideas, but then I consistently see them misrepresented by others who seem to half-understand it?

        Probably the clearest example is the idea of a “bounded context.” You will find microservice folks for example who say you should decompose such that each microservice is one bounded context and vice versa, but then when they actually decompose a system there's like one microservice per entity kind (in SQL, this is a relation or table, so in a microservice pet-adoption app there would be a cat service, a dog service, a shelter service, an approval service, a user service...).

        The thing is, Eric Evans is pretty clear about what he means by bounded context, and in my words it is taking Tim Peters’s dictum “Namespaces are one honking great idea—let’s do more of those!” and applying it to business vocabulary. So the idea is that Assembly Designer Jerry on the 3rd floor means X when he says “a project,” but our pipefitter Alice on the shop floor means something different when she says “a project,” and whenever she hears Jerry talking about a project she is mentally translating that to the word “contract” which has an analogous meaning in her vocabulary. And DDD is saying we should carefully create a namespace boundary so that we can talk about “pipefitting projects” and “pipefitting contracts” and “assembly-design projects” and then have maybe a one-to-one relationship between pipefitting contracts and assembly-design projects. Eric Evans wants this so that when a pipefitter comes to us and tells us that there's a problem with “the project,” we immediately look at the pipefitter project and not the assembly-design project. He really hates that wasted effort that comes from the program not being implemented in vocabulary that closely matches the language of the business.

        So if the microservices folks actually had internalized Eric's point, then they would carve their microservices not around kinds of entities, but rather archetypes of users. So for the pet adoption center you would actually start with a shelter admin service and a prospective adopter service and a background checker service, assuming those are the three categories of people who need to interact with the pet adoption process. Or like for a college bursar's office you would decompose into a teacher service, student service, admin service, accountant service, financial aid worker service, person who reminds you that you haven't paid your bills yet service.
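        A toy sketch of that namespace reading, using the pipefitting example above (the names and the translation function are hypothetical, just to make the point concrete): each bounded context keeps its own vocabulary, and a small translation lives at the boundary.

        ```python
        from dataclasses import dataclass

        # --- assembly_design context: Jerry's vocabulary ---
        @dataclass
        class AssemblyProject:
            project_id: str
            drawing_rev: str

        # --- pipefitting context: Alice's vocabulary ---
        @dataclass
        class PipefittingContract:
            contract_id: str
            assembly_project_id: str  # a reference across the boundary, not a shared concept

        # Translation at the context boundary: what assembly design calls a
        # "project" becomes a "contract" in pipefitting's language.
        def contract_for(project: AssemblyProject) -> PipefittingContract:
            return PipefittingContract(
                contract_id=f"PF-{project.project_id}",
                assembly_project_id=project.project_id,
            )
        ```

        When a pipefitter reports a problem with "the project," code in the pipefitting namespace only knows contracts, so the ambiguity is resolved at the boundary instead of leaking through the whole system.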

        So I thought it was a really good read, that's actually a really interesting perspective, right? But I don't think the ideas inside are communicated with enough clarity that I can bond with others who have read this book. It is kind of strange in that regard.

    • globular-toast 8 days ago

      One of my favourite books. Some people seem to prefer "Implementing Domain-Driven Design" by Vaughn Vernon. I have both on my shelf but I've never been able to read much of the latter.

      Clean Architecture by Robert C. Martin has a lot of the same stuff distilled into more general rules that would apply to all software, not just "business" software. It's a good book but overall I prefer DDD.

  • ChrisMarshallNY 8 days ago

    Writing Solid Code, from 30 years ago, by Steve Maguire, was a watershed, for me.

    Many of the techniques therein are now standard practice.

    It also espoused a basic philosophy of Quality, which doesn’t seem to have aged as well.

  • mehagar 8 days ago

    Clean Architecture and Fundamentals of Software Architecture: An Engineering Approach.

    • GiorgioG 8 days ago

      Ugh enough with clean architecture. So much boilerplate. Uncle Bob please retire.

revskill 8 days ago

I'm tired of reading "arbitrary terms" that try to standardize something that seems so subjective.

Give me the mathematical model and that's it. No more vague, human-created terms to translate your own ideas; that's just a hack.