kryptiskt 3 days ago

I think "trust, but verify" (as mentioned in the article) is a much more useful motto than "never trust anyone". The latter isn't a useful attitude: if you took it seriously, you would have to carefully check or rewrite everything from the ground up. And then you'd either have to trust the hardware anyway or enroll in a course on VLSI design. "Trust, but verify" is much more practicable, at least if you don't feel the need to verify absolutely everything[0] but are content with doing spot checks of all the features.

So, good article with a misleading title. Don't be paranoid.

[0] I don't consider 100% test coverage as anywhere near close enough for that.

  • fatnoah 3 days ago

    > "Trust, but verify"

    Even though my entire career has been software, I was an Electrical Engineering major, so I have taken VLSI design (and even designed an 8-bit ALU). My first job was writing embedded software, and I would frequently "trust, but verify" the hardware through the use of a logic analyzer.

    When I pulled out printouts from the analyzer to show the hardware team that the hardware had a bug, the surprised and incredulous look on their faces was priceless. The blow was somewhat softened by the fact that I'd also found a software bug.

    The constant blame shifting between SW and HW teams is one reason I left that job after less than a year.

    • Spivak 3 days ago

      This is why blameless postmortems are such a good thing.

      Going one step further having a culture where finding a bug or defect in your own code/design is rewarded makes it so people aren't afraid but excited to talk about them.

      • fatnoah 3 days ago

        > This is why blameless postmortems are such good thing.

        100%. I'm very much a fan of asking _what_ happened and then figuring out how to prevent it from happening again. Look back to inform the future, not to find blame. Ultimately, if one person can blow things up, there's a bigger issue at play.

      • ImHereToVote 3 days ago

        "Blame is for small children and God"

  • danielmarkbruce 3 days ago

    In some sense you are saying "I can't stand making people uncomfortable, so I contort reality".

    If you have to verify, you don't trust. Google "trust definition". Here is the first result:

    "firm belief in the reliability, truth, ability, or strength of someone or something".

    There is no reasonably sized body of code I ever wrote where I'd have a "firm belief" it was error free.

    There are many situations where we shouldn't believe someone's work is error free. It's fine. Anyone who has ever worked in a field where it can be shown that some work has an error knows how many errors humans make. Anyone who is honest with themselves in the software business knows just how easy it is to make an error.

    If you need a pithy phrase: "Assume good intent and capability, but verify work".

  • bitwize 3 days ago

    "Trust, but verify" comes from the Soviet Union and is an example of Russian humor. It helps that the Russian words for "trust" and "verify" rhyme.

    It really means something like "act like you trust the person, but secretly, don't trust them and double check that they've fulfilled their commitments". Putting on a smiling face and acting as if everybody was acting in good faith, while secretly expecting the stab in the back because you would do the same, was key to diplomacy in the USSR, even between departments of the same government.

    • lpribis 3 days ago

      "doveryay, no proveryay".

      It rolls off the tongue so much better in Russian.

    • ZoomZoomZoom 3 days ago

      I don't know how you came up with so much specific implicit meaning for the phrase. It simply means you can have high expectations but never a 100% guarantee.

      It's interesting to note that the English wiki has an article about the phrase, but the Russian one doesn't.

  • getpost 3 days ago

    > Don't be paranoid.

    How do you mean? I think the article (and my experience) suggests that you do have to be paranoid. [I looked up paranoid, just to be sure I knew the exact definition, and I didn't. It's an "extreme and irrational" fear. Is looking it up paranoid? Hahaha.] Colloquially, paranoia is extreme and not necessarily irrational. Think of Andy Grove, "Only the paranoid survive." Or Kurt Cobain, "Just because you are paranoid, it doesn't mean they're not after you."

    Anyway, the way I frame the issue of software quality is to hold the view that there are always errors, and the best you can do is apply extreme vigilance in attempting to ensure errors occur rarely.

  • bigstrat2003 3 days ago

    If you have to check up on someone to make sure they aren't screwing up (or misleading you), then you don't actually trust them. Thus, "trust but verify" is not trust at all.

  • JohnFen 2 days ago

    I think "trust but verify" is logically identical to "don't trust", to be honest.

    But trust isn't a binary, all-or-nothing sort of thing. There are always degrees. "Trust but verify" makes that explicit.

  • mistermann 3 days ago

    > I think "trust, but verify" (as mentioned in the article) is a much more useful motto than "never trust anyone". The latter isn't an useful attitude, if you took it seriously you would have carefully check or rewrite everything from the ground up.

    Not actually. You're describing an abstraction: your opinion/perception of what must/can be done.

    It is possible to be comfortable with uncertainty and the unknown (everyone already is, but only in certain, intuitive (in large part due to cultural conditioning, which comes in a variety of forms) ways), it's mainly just counter-culture and counter-intuitive, thus needs strategies, and practice(!) (plus some non-trivial multi-level, multi-dimensional recursion....this is what us HN folks are good at, and love though, right? Right?[1]). We've all been through the hard work at least once, in a certain (mostly) shared way. There are other ways though.

    > "Trust, but verify" is much more practical

    How do you verify your verification in complex scenarios though? I bet I know: trust/contentment (in your verification skills), though this layer typically is not revealed to us, so causes no psychological unrest ("all is well"), because it does not exist.

    > Don't be paranoid.

    What do you think your reaction would be if you discovered this is not just wrong, but backwards?

    [1] Alternatively: maybe we are only good at it, and only love it, sometimes? But then, "we" is a complex and deep set, into which we have little insight, but also plenty of hallucinated "insight".

Joel_Mckay 3 days ago

After many years, I settled on a constraint based design philosophy:

1. type checking, data marshaling, sanity checks, and object signatures

2. user rate-limits and quota enforcement for access, actions, and API interfaces

3. expected runtime limit-check with watchdog timers (every thread has a time limit check, and failure mode handler)

4. controlled runtime periodic restarts (prevents slow leaks from shared libs, or python pinning all your cores because reasons etc.)

5. regression testing boundary conditions becomes the system auditor post-deployment

6. disable multi-core support in favor of n core-bound instances of programs consuming the same queue/channel (there is a long explanation why this makes sense for our use-cases)

7. Documentation is often out of date, but if the v.r.x.y API is still permuting on x or y, then avoid the project like old fish left in the hot sun. Bloat is one thing, but chaotic interfaces are a huge warning sign to avoid the chaos.

8. The "small modular programs that do one thing well" advice from the *nix crowd also makes absolute sense for large infrastructure. Sure a monolith will be easier in the beginning, but no one person can keep track of millions of lines of commits.

9. Never trust the user (including yourself), and automate as much as possible.

10. "Dead man's switch" that temporarily locks interfaces if certain rules are violated (i.e. host health, code health, or unexpected reboot in a colo.)
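Item 3 in the list above can be sketched in a few lines. This is a hypothetical illustration, not code from any framework; the names `supervised`, `on_timeout`, and the 0.5-second budget are all invented for the example:

```python
import threading
import time

WATCHDOG_LIMIT = 0.5  # seconds a task is allowed to run (illustrative)

timed_out = []

def on_timeout(name):
    # Failure-mode handler: in production this might restart the worker,
    # alert an operator, or trip the dead man's switch from item 10.
    timed_out.append(name)

def supervised(name, task, limit=WATCHDOG_LIMIT):
    t = threading.Thread(target=task, daemon=True)
    t.start()
    t.join(timeout=limit)   # watchdog: wait at most `limit` seconds
    if t.is_alive():        # still running past its time budget
        on_timeout(name)

supervised("fast", lambda: time.sleep(0.05))
supervised("slow", lambda: time.sleep(2))
print(timed_out)            # only the slow task trips the watchdog
```

A real implementation would also need per-thread heartbeats rather than a blocking join, but the shape is the same: every unit of work gets a deadline and an explicit failure path.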

As a side note, assuming one could cover the ecosystem of library changes in a large monolith is silly.

Good code, in my opinion, is something so reliable you don't have to touch it again for 5 years. Such designs should not require human maintenance to remain operational.

There is a strange beauty in simple efficient designs. Rather than staring at something that obviously forgot its original purpose:

https://en.wikipedia.org/wiki/File:Giant_Knife_1.jpg

https://en.wikipedia.org/wiki/Second-system_effect

Good luck, and have a wonderful day =3

agentultra 3 days ago

To me, this is the argument for formal verification. I don't want to hear a hand-waving explanation that this algorithm will always complete. If the algorithm is sufficiently complex I want proof. Otherwise, why would I believe you?

Abstractions, in the mathematical sense, always hold (unless there is a flaw in the definition itself). Axioms, in any sense, are always going to throw a wrench in things. Thank Gödel. But that shouldn't mean we cannot make progress.

Do the work, show your proof! Think hard!

Although sometimes all you need are a few unit tests.

The key is to develop the wisdom to know when unit tests aren't sufficient for the task at hand.

  • atrus 3 days ago

    > Abstractions, in the mathematical sense, always hold (unless there is a flaw in the definition itself). Axioms in any sense are always going to throw a wrench in things. Thank Godel. But that shouldn't mean we cannot make progress.

    But this seems like an argument against formal verification. Formal verification is 100x harder than writing tests...and still doesn't guarantee correctness? Those axiom wrenches are still there, those flaws in the definition are still there, not to mention the flaws in the proof writing. All that extra effort for what gain?

    • agentultra 3 days ago

      It guarantees correctness with respect to the axioms chosen. That's a much more powerful statement and guarantee than a unit test, which only exercises a single example.

      A formal proof that an algorithm makes progress, doesn't require a lock, or whatever property the proof is arguing is 100% guaranteed for every case.

      For example, a simple function over the set of integers. A unit test can only test individual elements of the set. A proof demands more: the property must hold over all elements of the set.
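A toy sketch of that contrast. The `clamp_add` function and its byte-sized domain are hypothetical, chosen because the domain is small enough to enumerate; for a finite domain, exhaustive checking is the closest a test can get to a universal proof, while for the full set of integers only a proof works:

```python
def clamp_add(a, b, lo=0, hi=255):
    """Add two values and clamp the result into [lo, hi]."""
    return max(lo, min(hi, a + b))

# Spot checks, as in a unit test: individual elements of the set.
assert clamp_add(1, 2) == 3
assert clamp_add(200, 100) == 255

# The property-style claim: the result stays in bounds for *every*
# element of the (finite) domain, not just the sampled ones.
assert all(0 <= clamp_add(a, b) <= 255
           for a in range(256) for b in range(256))
print("property holds over the whole domain")
```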

      The Incompleteness Theorem puts a limit on the provability. That hasn't stopped mathematicians from pursuing the formalization of mathematics. It shouldn't stop computer scientists and programmers either. In fact it tends to make us more honest about the limits and capabilities of our systems.

      • senkora 3 days ago

        Agreed.

        A good set of unit tests partitions the input space into parts, and then provides an existence proof that the program is correct for an example chosen from each part.

        A good formal proof does the same, but provides a universal proof that the program is correct for all examples chosen from each part.

        Both strategies can fail if they miss an important part of the input set. Unit tests can also fail if a program is “adversarial” and fails on a specific input that isn’t the chosen example.

        In practice, achieving “a good set of unit tests” requires you to mentally work out how to partition the input set in a way that matches your program, and at that point you’re most of the way to proving it correct, so you might as well do that. It still might make sense to write unit tests if you don’t have the tooling to enforce a mechanical proof.

  • mbonnet 3 days ago

    Formal verification is nice, but requires crazy skilled people at high cost. It's not always practical/feasible to have as part of your SWE process.

    • agentultra 3 days ago

      And it's not necessary for every project.

      It's nice to have when it is though.

      The cost is coming down. Proof automation is incredible today and rapidly improving. This is the part that does the tedious parts of a proof for you so that you can focus on the theorems that matter. The languages and proof systems themselves are easier than ever to pick up and use which is bringing the skill cost down.

      I don't think most software projects need a huge, dedicated team of specialists to benefit from formal software verification.

  • bee_rider 3 days ago

    I dunno. Even in some fairly math-y situations, like solving sparse linear systems, we get stuff like BiCGStab which often will work well and converge in cases where we can’t prove that it must.

    CS seems mostly good for proving that everything is hopeless in the rigorous general case and, hey, here are some heuristics to give you hope again.

bruce511 3 days ago

>> Random access of a character in a text buffer could take constant time (for ASCII) or linear time (for UTF-8) depending on the character encoding

This is true, but incomplete. All Unicode encodings take linear time, not just UTF-8.

That's because a character can contain multiple code points. UTF-32 allows random access of code points in constant time, but not of characters.

UTF-16 encodes some code points as two code units (surrogate pairs), so it is variable-width like UTF-8 in that regard.
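A rough Python illustration of both points; the `nth_codepoint` helper is hypothetical, written for this example. Reaching the n-th code point of a UTF-8 buffer means scanning, and even fixed-width code points don't give you user-perceived characters, because one character can span several code points:

```python
s = "he\u0301llo"            # renders as "héllo": 6 code points, 5 characters
buf = s.encode("utf-8")

def nth_codepoint(buf, n):
    """Return the n-th code point of a UTF-8 buffer: O(n), not O(1)."""
    count = 0
    for i, byte in enumerate(buf):
        if byte & 0xC0 != 0x80:      # not a continuation byte: new code point
            if count == n:
                j = i + 1            # scan to the end of this code point
                while j < len(buf) and buf[j] & 0xC0 == 0x80:
                    j += 1
                return buf[i:j].decode("utf-8")
            count += 1
    raise IndexError(n)

print(nth_codepoint(buf, 1))  # 'e' -- the base letter, without its accent
print(len(s))                 # 6 code points for 5 visible characters
```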

  • darby_nine 3 days ago

    I've come to the conclusion that "character" is worth abandoning as a coherent concept. Words and bytes are much more useful, easier to define, and not dependent on which runtime you're using, and the number of times it's worth slicing a word into characters is actually pretty low. One exception to this might be Chinese (and I'm sure other languages), where differentiating words is not a straightforward task of splitting on whitespace; but Chinese also has much more straightforward glyph rendering, bypassing most of Unicode's nastiness. Random-access strings in Chinese are actually super straightforward once you leave variable-length encoding to the heavy users of ASCII and emoji.

  • mjevans 3 days ago

    IMO within a small range iterating isn't too painful.

    It's probably safe, maybe even close enough to optimal for typical use, to have an array or list of bytestrings for each line. Or maybe more complex Line (of text) objects that record the byte-length, 'codepoint'-length, and '(display)character'-length. There might even be special cases built in for typical and massive documents (number of lines) and overly long lines.

    • bluGill 3 days ago

      In most real-world cases N is small, so a linear search for data beats a binary search: the binary search stalls the pipeline with cache misses all the time, while the linear search prefetches everything into the cache before you need it, resulting in no pipeline stalls.

      Of course, different computers (CPU, memory configuration... they all matter) have different characteristics, so the N that is large enough differs for each. In general, though, linear search is fast enough these days. Where it isn't, you can look at a profile to verify your hotspot.
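Since the crossover point is machine-dependent, a sketch for measuring it yourself rather than trusting anyone's rule of thumb. Caveat: in pure Python, interpreter overhead swamps the cache effects described above, which really concern compiled code, so treat the printed numbers as a template for the experiment, not the verdict:

```python
import bisect
import timeit

def linear_search(xs, target):
    # prefetch-friendly sequential scan over a sorted list
    for i, x in enumerate(xs):
        if x >= target:
            return i
    return len(xs)

def binary_search(xs, target):
    # jumps around memory; cache-unfriendly for large N
    return bisect.bisect_left(xs, target)

for n in (8, 64, 512):
    xs = list(range(n))
    assert linear_search(xs, n - 1) == binary_search(xs, n - 1)
    t_lin = timeit.timeit(lambda: linear_search(xs, n - 1), number=10_000)
    t_bin = timeit.timeit(lambda: binary_search(xs, n - 1), number=10_000)
    print(f"n={n:4d}  linear={t_lin:.4f}s  binary={t_bin:.4f}s")
```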

  • LegionMammal978 3 days ago

    Really, it's not even random access per se that takes linear time, but random access indexed by code points. You can access the middle of a UTF-8 string just fine if you index by byte position. (Or more generally, by code units for UTF-16.)

  • myworkinisgood 3 days ago

    Yeah, but the author was talking about ASCII. ASCII takes constant time.

    • bruce511 3 days ago

      The author differentiated in the statement between ASCII and UTF-8. The reference to the specific encoding (UTF-8) was misleading, because the reference should have been to Unicode (the mapping): the issue applies to all Unicode encodings and is not limited to UTF-8.

      ASCII does indeed take constant time - which I agreed with.

nottorp 3 days ago

My favourite advice to give out is:

"If you feel very smart after writing a particularly intricate piece of code, it's time to rewrite it to be more clear."

Speaking of not trusting yourself.

  • will1am 3 days ago

    Clear and maintainable code is valuable

pcwelder 3 days ago

High quality article with some new advice.

> Read more documentation than just the bare minimum you need

I wish I had practiced this before; I'd be as good and quick as some of my brilliant colleagues.

  • protomolecule 3 days ago

    I usually do, but not long ago, after I'd read the docs on boost::spirit, gpt4 suggested a trick I could have come up with myself, but I had failed to connect two different pieces of the documentation.

    This left me wondering if reading the docs thoroughly is still a good investment.

    On the other hand, having some level of knowledge of a library saves me some time, since I can often write things faster by myself than explain to gpt4 in more and more detail what I need.

  • skydhash 3 days ago

    I love it when projects give their documentation as a single HTML page or a PDF. In the worst case, I'd take a nice site like Laravel's documentation. For any piece of software I use, I try to speed-read the whole documentation, or a significant part of it if it's big. A mental model of all the offered features is a nice help when solving problems.

atribecalledqst 3 days ago

> Failing tests indicate the presence of bugs, but passing tests do not promise their absence.

As somebody who primarily lives on the testing side of the house, I've definitely run into cases where the developer promises that their unit tests will make a new feature less buggy, then about 5 minutes later I either find a mistake in the test or I find a bug in something that the developer didn't think to test at all.

I've also seen instances where tests are written too early, using a data structure that gets changed in development, and then causes churn in the unit tests since now they have to be fixed too.

I've generally come to think that unit tests should be used to baseline something after it ships, but aren't that useful before that point (and could even be a waste of time if they take a long time to write). I don't think I'll ever be able to convince anybody at my company about this though lol

  • gchamonlive 3 days ago

    Tests are akin to scientific experiments. They test hypotheses and try to falsify claims. They shouldn't be seen as ground truth, but as ways to gain information about what the system claims to be doing. In this sense it makes sense that tests will become obsolete or evolve with the system, because the model and domain upon which the system is based also evolve and change with time.

    • randomdata 3 days ago

      This is why I'm surprised more languages don't instill the idea of "public" and "private" tests the way Go does.

      Your "public" tests should document the API for future programmers. This is the concrete contract that should never change, no matter what happens to the implementation. If these tests break, you've done something wrong.

      Your "private" tests are experiments that future programmers know can be removed if they no longer fit the direction of the application.

      • kubanczyk 2 days ago

        So, test the compatibility guarantee (of your major number in semver).

        How is this formalized in Go?

  • fendy3002 3 days ago

    Unpopular opinion, but I always say that unit tests are contracts for the API. If you don't want or don't need to make a contract, don't write unit tests.

    Unit tests' main purpose is not to improve code or reduce bugs; their main purpose is to verify that the code works against the contract defined in the unit tests. Code improvement or bug reduction is an added benefit, if any.

    • dartos 3 days ago

      > Unpopular opinion, but I always say that unit tests are contracts for the API

      You’re talking about integration tests or e2e tests.

      Those don’t sound like unit tests.

      • bluGill 3 days ago

        Unit tests should test an API; if there is no API there, the level of "unit" is probably too low. The purpose of all tests is to say "no matter what, this will never change". "Never" is a bit too strong, since you are allowed to make changes, but any API your unit tests cover will be painful to change: the tests will also have to change, so you have nothing to guide you, and odds are you don't have good coverage from other tests (integration tests would catch issues, but you rarely have all cases covered).

        Or to put it a different way, your unit tests should cover units that have a good boundary with the rest of the system. This should sound like a module, but there is reason to have a module be a larger thing than your unit (most of the time there shouldn't be, but once in a while it is useful), so while there is overlap, it is often useful to consider them different.

        Integration tests cover the API, but they do not test the API (well, they often use some API as well, but they won't cover all your internal APIs).

      • matt_j 3 days ago

        API doesn't imply integration. Consider any module or package to have an API exposed to the user of the package. Unit tests should assert that the package behaves as expected.

        • bluGill 3 days ago

          By users I assume you mean other developers who use the API (including you next week). It would be better to use a different term, as "user" often means "end user" or "customer", not internal users.

          • matt_j 3 days ago

            Yeah, I'm not sure what the better word is. As a programmer, I use APIs. "interact with", "code to/against". I think "user" is OK. We're all users at different levels of abstraction.

      • randomdata 3 days ago

        That's exactly what Beck described when he originally coined the term.

      • thfuran 3 days ago

        Your units don't have some interface through which they interact with other units?

        • bobthepanda 3 days ago

          API means an accessible public interface; I feel like I’ve seen it used for interfaces accessible “to the public/other teams” from a service standpoint, but not to describe any sundry public methods of a file.

  • marcosdumay 3 days ago

    > I've also seen instances where tests are written too early, using a data structure that gets changed in development

    That's as clear a signal that they are testing the wrong interface as you can get.

    Unfortunately, developers think of tests as testing code, not interfaces. As a natural consequence, they migrate towards testing the most fine-grained breakdown of their code they can; it increases the ratio of code coverage to number of tests... at the cost of functionality coverage.

    • Spivak 3 days ago

      There are two kinds of tests I think developers are trying to write, and I think both have merits. Writing a test for the code is actually totally fine, so long as the reason you're doing it is that you want to be able to depend on that behavior and you need something to scream if it ever changes.

      I think starting with a test that the code does what the code does is actually a pretty good starting point, because it's mechanical. If you never end up revising that code, it can just live there forever; but if you do end up revising the code, those tests will slowly morph over time into testing the interface. When you actually do your revision, you get a very clear signal: the test that is now failing as a result of the change clearly can't be the important part.

      Waiting for the change to see what stays the same I think is often more accurate than trying to guess what the invariants are ahead of time.

      • marcosdumay 3 days ago

        I get the impression you are talking about taking over legacy code.

        Well, the invariants you want to test are what people want the software to do. If you create those tests from the beginning, there's nothing to guess. But of course, if people write a bunch of code and throw that knowledge away, you have to recover it somehow, and it will necessarily involve a lot of guessing.

  • nullserver 3 days ago

    I inherited an ancient project that had literally tens of thousands of tests.

    I reviewed hundreds of them and tried rewriting dozens. Eventually I realized that essentially all of the tests were just checking that mock data, manually manipulated by the test, gave the result the test expected.

    Absolutely nothing useful was actually being tested.

    Some team spent a couple of years writing an unholy number of tests as a complete waste of time. Basically just checking off a box that code had tests.

    • randomdata 3 days ago

      The first box on the testing checklist states that the test should first fail. I wonder how they managed to test nothing while still seeing the tests transition from failure to success.

      • bluGill 3 days ago

        From the GP: mock data being manually manipulated by the test gave the result of test expected.

        These tests are easy to write: your mock returns something, and you verify that the API returns whatever the mock does. The test fails until the code is written, then passes. However, such tests are of negative value: you cannot refactor anything, because the code only calls a mock and returns the mock's data.
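A minimal sketch of that anti-pattern; all the names (`get_username`, `fetch_user`) are invented for illustration. The "test" pushes canned data through a mock and asserts that the mock's own return value came back out, so nothing real is exercised:

```python
from unittest.mock import Mock

def get_username(db, user_id):
    # code under "test": a thin pass-through over the data layer
    return db.fetch_user(user_id)["name"]

db = Mock()
db.fetch_user.return_value = {"name": "alice"}  # test manufactures the data

# This can only ever compare the mock's canned data to itself. Any refactor
# that stops calling the mock breaks the test, and any bug in the real
# query logic goes completely unexercised.
assert get_username(db, 42) == "alice"
print("passed -- but nothing real was tested")
```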

        • randomdata 3 days ago

          I can't imagine writing a function that is nothing more than an identity function would be easy to write (unless it was explicitly intended to be an identity function, I suppose). There must be some terrible gut wrenching feeling that goes along with it, if nothing else? Frankly, I don't understand how this situation is possible in practice, but perhaps I misunderstand what is written?

    • drewcoo 3 days ago

      I've never seen that but I've heard the claim before.

      So . . . is the team of devs who spent years writing and maintaining those tests incompetent or is it the new dev with the complaint? If it was the whole team, how did that happen?

  • HankB99 3 days ago

    > something that the developer didn't think to test at all.

    Raising hand, guilty as charged. I test for things based on my concept of how the system works, but those darn users may have other ideas!

    Actually, when I found a user who seemed to have a knack for finding bugs, they were gold and I let them know I appreciated their efforts.

    > unit tests should be used to baseline something after it ships

    That has not been my experience. I found that unit tests let me get the pieces working properly so that when assembled, the chances that everything worked as expected were much improved.

    • willcipriano 3 days ago

      > those darn users may have other ideas

      Getting those ideas in front of you is the job of the product team.

      It isn't your fault if the guy writing the ticket refuses to talk to the users and/or you.

      "Ticket meets acceptance criteria, please submit a new request for this new feature."

  • sameoldtune 3 days ago

    My gift to any team I work with is an integration test harness that usually has some kind of DSL for setting up state. This looks wildly different depending on the project. But my theory is that if tests are easy to write then it is easy to make more of them. So it is worth it to write some ugly code one time under the hood to make this happen.

    If every test requires copy pasting a bunch of sql statements and creating a new user and data, my experience is the team will have 3-4 of these kinds of tests. But if the test set-up code looks like `newUser().withFriends(3).withTextPost(“foo”).withMediaPost().sharingDisabled().with…` then the team is enabled to make a new integration test any time they think of an edge case.
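A bare-bones sketch of such a builder, mirroring the comment's hypothetical `newUser()...` chain; none of these names come from a real framework, and a real harness would insert rows or call an API instead of returning a dict:

```python
class UserBuilder:
    def __init__(self):
        self.state = {"friends": 0, "posts": [], "sharing": True}

    def withFriends(self, n):
        self.state["friends"] = n
        return self                         # returning self enables chaining

    def withTextPost(self, text):
        self.state["posts"].append(("text", text))
        return self

    def withMediaPost(self):
        self.state["posts"].append(("media", None))
        return self

    def sharingDisabled(self):
        self.state["sharing"] = False
        return self

    def build(self):
        # a real harness would persist this state to the test database
        return dict(self.state)

def newUser():
    return UserBuilder()

user = (newUser().withFriends(3)
                 .withTextPost("foo")
                 .withMediaPost()
                 .sharingDisabled()
                 .build())
print(user["friends"], len(user["posts"]), user["sharing"])
```

The payoff is exactly the one described: once edge-case setup costs one chained line instead of a page of SQL, people actually write the tests.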

    • williamdclt 3 days ago

      we have _exactly_ that (down to the `withXXX` syntax), and indeed I found it great.

      Downside is that test setup can be a bit slow: each `withXXX` can create more data than really necessary (e.g. a "withPost()" might update some "Timeline" objects, even though you really don't care about timelines for your test). Upside is that it's a lot closer to what happens in reality, regularly finding bugs as a side effect. And it also aligns incentives: you make your tests faster by making your application faster.

  • dartos 3 days ago

    I’ve been preaching this for a while.

    When a codebase gets too big and devs get too clever with their tests, the whole test suite becomes complicated.

    If your test suite is approaching the complexity of the actual codebase (what with layers of mocks and fixtures that are subtly interdependent), how could you be expected to trust a test you wrote more than the code you wrote?

    • bloopernova 3 days ago

      I (sysadmin/devops) am writing some nodejs and the complexity of the tests is confusing to me. I'm a nodejs beginner to be sure, and I'm not experienced enough to verify what copilot gives me.

      All those mocks, and other Jest code, all seem overly complicated but I don't know of anything "better".

  • packetlost 3 days ago

    > I've generally come to think that unit tests should be used to baseline something after it ships, but aren't that useful before that point

    I disagree, but not entirely. I think there's a balance. Tests can be a great way to execute code in isolation with expected inputs/outputs and can help dramatically in absence of other ways to execute the code. But in general, I mostly agree. Tests are mostly valuable as a way of ensuring you don't break something that was previously working, but they are still valuable for validating assumptions as you go.

  • englishspot 3 days ago

    > I don't think I'll ever be able to convince anybody at my company about this though lol

    especially if the company uses 100% code coverage as a metric for success

  • randomdata 3 days ago

    > I've generally come to think that unit tests should be used to baseline something after it ships

    In theory, but I've never seen anyone successfully write tests after something ships. By that time much of the context that should be documented in tests is forgotten.

    • bluGill 3 days ago

      I've written a handful of such tests over the years. Once in a while the original had no tests and I had no confidence I could change it without writing some. Once in a while the original was buggy, and after getting tired of going back to fix bugs I wrote a few tests. Once I had a case where the fix for bug A introduced bug B, and the obvious fix for B was to revert the code, which brought back bug A; when someone realized this was happening every year, we wrote a few tests just to stop that pattern.

      The above, though, is a very rare exception. The general rule is that once code is shipped, management doesn't allow you time to make it better.

  • drewcoo 3 days ago

    Tests don't find bugs. They find things a developer needs to investigate.

    Most tests I've seen aren't aids to diagnosability, so if there is a bug, the developer is still needed to find it.

    > tests are written too early, using a data structure that gets changed in development

    I wouldn't call this "too early," but "testing the wrong thing" if they were testing internal particulars instead of behaviors.

  • jagged-chisel 3 days ago

    > ... instances where tests are written too early ...

    omg yes. However, reading this makes me wonder how the TDD people handle this.

    • stonemetal12 3 days ago

      It is just the cost of good quality. This is like suggesting you shouldn't write error handling code, because the code might change and have different errors that need to be handled.

      Also if the interface doesn't change but your unit tests fail on a data structure change then perhaps your tests are too coupled.

    • marcosdumay 3 days ago

      Why would anybody doing TDD have tests highly coupled with the implementation? They shouldn't have this problem at all.

      The GP's diagnosis isn't good here. Those are bad tests, not tests written too early. Which doesn't mean you shouldn't wait for your functionality to be accepted before testing it; sometimes you should; but that happens for completely different reasons.

    • randomdata 3 days ago

      By using TDD, which promotes isolating the changeable surface area to a small area during discovery. That way you don't have to introduce the complexities of API changes across the rest of the application surface area, avoiding the churn spoken of earlier.

    • bluGill 3 days ago

      Most of the time I already know what I'm going to write, so most of the time I can start with a simple test and it isn't too early. It is rare to be presented with a problem where you don't know whether it is solvable, or how to solve it, and to jump right into code (as opposed to research or whiteboard discussions); by the time that groundwork is done, enough is in place that writing tests isn't too early.

tm11zz 3 days ago

This is a really useful mindset as a programmer, but it backfires in real life, as it makes you an anxious person.

  • eru 3 days ago

    I guess you need to compartmentalise into different kinds of 'trust'. The not 'trusting' you do with a computer is different from the trusting you do with fellow humans in daily life. They just happen to use the same word in English.

    • blitzar 3 days ago

      I trust the computer far more than the fellow humans. The computer will generally give a predictable output for a given input. "fellow humans in daily life" .... not so much.

      • will1am 3 days ago

        Human variability and unpredictability can also be a source of creativity. But I understand you completely

      • eru 3 days ago

        I hope you never have to cross a street, or get anywhere near a car with a human behind the wheel.

        • blitzar 3 days ago

          It's terrifying; sometimes when I am crossing the street people accelerate, sometimes they slow down, sometimes they stop at red lights, sometimes they drive through them.

        • will1am 3 days ago

          That's why I don't have a driver's license

          • eru 3 days ago

            Yes, but alas other people do.

  • mistermann 3 days ago

    > but backfires in real-life as it makes you an anxious person.

    If I were to point out that this is an approximation, or a tautology (it is only true to the degree that it is true, which is not (necessarily[1]) 100% of the time), would it make you anxious? And if so, do you think it would be possible for you to learn[1] a new approach so that it does not make you anxious?

  • will1am 3 days ago

    While achieving absolute certainty in code correctness is often impossible

  • Zambyte 3 days ago

    Funnily enough, I program in real life and have also been anxious lately.

m0llusk 3 days ago

Also from 8 days ago https://news.ycombinator.com/item?id=40764826 and 9 days ago https://news.ycombinator.com/item?id=40760885.

The "A Python script should be able to run on any machine with a Python interpreter." remark is amusing. Recently I ended up installing a whole new distribution version just to get Python 3.11 and the new library versions that the script I was running depended upon.
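
One hedge against that portability gap (a sketch only; the 3.11 floor simply mirrors the version mentioned above) is to fail fast with a clear message rather than crash later on a missing language or library feature:

```python
import sys

REQUIRED = (3, 11)   # the floor the script in question needed

def check_interpreter(version=None) -> None:
    """Exit with a clear message if the interpreter is too old.

    `version` exists only so the check can be demonstrated; normally
    the current interpreter's version is used.
    """
    v = tuple(version) if version is not None else sys.version_info[:2]
    if v < REQUIRED:
        raise SystemExit(
            f"This script needs Python {REQUIRED[0]}.{REQUIRED[1]}+, "
            f"found {v[0]}.{v[1]}."
        )

check_interpreter((3, 12))   # new enough: returns silently
```

It doesn't solve dependency pinning, but it turns a cryptic SyntaxError on an old interpreter into an actionable message.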

Giving a program finish and polish does have this kind of unlimited depth, but it is critical to remember that it comes after the initial coding, which is always rough with many gaps, since that is always where things start. And then we make many choices. Hearing people commit to strong typing because docs are always out of date is another chuckle. Maybe if the docs were kept right from the start, perhaps with some use of automatically generated reference pages, there wouldn't be such frequent problems getting types right in the first place? To each their own, but strong-typing hype is just another currently popular method among many. Strong typing has its value, but like every other methodology it cannot be absolutely trusted to save programmers from error.

atoav 3 days ago

As an electronics guy, this is really ingrained. Not only could you easily waste days on a problem if you assume things rather than check them; in some cases you might also get a painful experience, or, depending on what you're working on, that mistake might even be your last, burn down a house, kill others, or what not. Checking your priors is one thing; ensuring your stuff fails gracefully when they are abnormal is another. Meanwhile, most software won't even handle a network disconnect gracefully.

As more and more stuff™ moves from hardware into software, for totally understandable reasons, the amount of software that could absolutely ruin human lives is growing. This calls for higher standards across the whole field of software engineering.

  • skydhash 3 days ago

    Hardware fails because of physical rules, software is more abstract and more flexible. I agree that higher standards should be required, but business people won't do it as it will make it as slow as hardware engineering. I'd love to see formal verification for at least the business core of any application. That should incentivize product managers to write better specifications.

    • atoav 2 days ago

      The electrical grid is typically 230/120 Volts AC at 50/60 Hz. A bad design will rely on that and fail once something about that is out of whack in either direction. A good design will gracefully shutdown, or operate normally within reasonable thresholds.

      You can apply the same principle to programming (in fact, many programmers do). If your program falls completely apart and explodes into pieces once any peripheral, call, timing, or whatever isn't as expected, then it isn't a resilient program. If it fails in unexpected ways while still writing data, you might even have a dangerous program.

      So many programmers are happy with writing a program that works. That would be as if a car manufacturer were happy to call it a day once the car manages to accelerate.
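
The mains-voltage example above can be sketched in code (the threshold numbers are illustrative, not from the comment): act on a reading only when it falls inside explicit bounds, and route everything else to a controlled shutdown path instead of undefined behavior.

```python
# Illustrative thresholds: 230 V nominal, +/- 10%.
NOMINAL_MIN, NOMINAL_MAX = 207.0, 253.0

def handle_mains_reading(volts) -> str:
    """Operate only inside explicit bounds; otherwise fail gracefully."""
    if not isinstance(volts, (int, float)):
        return "shutdown: non-numeric sensor reading"
    if NOMINAL_MIN <= volts <= NOMINAL_MAX:
        return "operate"
    return f"shutdown: {volts:.1f} V outside [{NOMINAL_MIN}, {NOMINAL_MAX}]"

assert handle_mains_reading(230.0) == "operate"
assert handle_mains_reading(400.0).startswith("shutdown")
assert handle_mains_reading("glitch").startswith("shutdown")
```

The point is not the numbers but the shape: the abnormal cases are enumerated and handled, not assumed away.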

vzaliva 3 days ago

As someone who routinely formally verifies code, I can confirm that you should not trust yourself. I keep finding bugs and hidden assumptions in the most trivial code or even in code covered with unit tests. In my opinion, formal verification is the only way to truly trust some code.

jll29 3 days ago

This is a good post, and I agree that the "paranoid programmer" attitude is useful.

The post could also talk about yet another desirable paranoia, namely "Am I building the right thing?" - so talking to customers/clients/stakeholders/users is arguably one of the most important steps when seeking re-affirmation and controlling one's work. Nothing worse than people who think they understand a technical problem, but it's not the one that the customer wants solved...

cryptica 3 days ago

> Programmers Should Never Trust Anyone, Not Even Themselves

Yep. Can confirm, the best programmers are often paranoid. I guess over time your brain becomes more logical and you start to notice inconsistencies everywhere around you... Then before you know it, it feels like you're living on planet of the apes.

mistermann 3 days ago

> So if abstractions can be problematic, then should we try to understand a topic without abstractions (to know cars as they really are)? No. When you dig beneath abstractions, you just find more abstractions.

Demonstrating that language and colloquial "logic" are also abstractions.

> It’s turtles all the way down.

Memes and catch phrases are abstractions.

> These layers of abstractions go down until we hit our most basic axioms about logic and reality.

Reality too is an abstraction. Luckily all humans run Faith, and it runs invisibly, otherwise I suspect we'd have not made it this far. Though, it now seems like that which saved us may now take us down (climate change, nuclear weapons, other/unknown).

> Trust, but verify.

Haha...of course, just use logic and critical thinking!

megamix 3 days ago

There's some leaky reasoning about the abstraction hierarchy, I think. Beyond, or down to, a certain layer, I know that I can trust the process. I'm fine baking; I trust the physical process.

simonw 3 days ago

This is why I try to have my code commits bundle tests along with any corresponding implementation changes. The job of the test is to PROVE that the updated implementation code did what it was supposed to do - both now and into the future.

If the test doesn't do that - if it still passes even if you revert the implementation - then the test isn't doing its job.

ramesh31 3 days ago

You do need a certain level of self-confidence to actually get anything done, though, and to not get lost in analysis paralysis. The mantra goes "strong opinions, loosely held". Be prepared to vigorously defend your position, while simultaneously being willing to immediately toss it aside in the face of overwhelming evidence.

keepworking 3 days ago

I think this is a kind of "need for cognitive closure". "Never trust anyone" is different from "everyone can be wrong". The former keeps us from moving forward, while accepting that everyone can be wrong lets us move forward.

leecommamichael 3 days ago

What does it do to the psyche, to not be able to trust anyone?

  • Barrin92 3 days ago

    Nothing if you don't take it personally. There's 'not trusting' in the sense of recognizing that most processes or people are unreliable, including yourself and there's 'not trusting' in the sense of thinking everyone's out to get ya or not living up to your expectations.

    The second one largely has to do with ego and is the one that creates insecure people who usually hold others to different standards than themselves, the first is just a realistic view of the world and thus very useful

  • DoctorDabadedoo 3 days ago

    Cynicism and burnout, but if you are in a dysfunctional environment, it's hard to move away from that either way.

  • amelius 3 days ago

    Programmer syndrome.

  • mistermann 3 days ago

    Yet another very important portion of reality that science hasn't gotten around to yet. Perhaps we'll find the answer a bit further out in the universe, or deeper inside matter.

psychoslave 3 days ago

> verifying code correctness is impossible

Somehow, given the state of our industry, yes. But this is not an absolute truth.

I mean, we do have the theoretical frameworks, and even the tools, that allow proving code correct. It's just that mapping this know-how onto the expected pace of feature delivery is very uncommon.

  • tpm 3 days ago

    It is an absolute truth in the sense that it is true for all code that was not specifically written to be formally proven correct. So, given an arbitrary piece of code, it's impossible to verify it (because of the halting problem).

    • skydhash 3 days ago

      It's not impossible to verify it, it's impossible to automatically verify it with another program. Other issues are the underlying abstract machine (which can have its own bugs) and the surrounding environment (you either need to assert or/and sanitize your inputs).

  • n4r9 3 days ago

    > tools to come with solutions that allow to proof that code is correct

    I may be misunderstanding, but isn't part of the problem that these tools are themselves written in code and therefore subject to bugs?

    • agentultra 3 days ago

      This is addressed by the de Bruijn Criterion. The essential idea is that a small number of "trusted" rules should be enough to satisfy even the largest proofs. You have to keep the number of rules small enough that they can be reviewed and understood by humans so that you can trust the proofs verified by the kernel.

      • n4r9 3 days ago

        That probably helps to reduce the occurrence of issues, but I feel you are still ultimately relying on the correctness of proof assistants like Coq. And I am sure that bugs are occasionally found in Coq!

        • agentultra 3 days ago

          Indeed it does happen and it's unavoidable. You have to pick your axioms from somewhere and sometimes we pick the wrong ones or we find errors in our definitions. There's no "complete" or "perfect" system.

    • superidiot1932 3 days ago

      There are verified compilers, such as CompCert, and also ways to verify that a given binary does in fact correctly implement a specification.

dwighttk 3 days ago

The radical trust of turning on a computer

NoPicklez 3 days ago

"Trust but verify"

Also, perform a critical self-review and ensure you remain skeptical.

mewpmewp2 3 days ago

But then you say "trust, but verify" in the post.

  • fragmede 3 days ago

    trust but verify translates to trust no one but don't be a dick about it

    • nine_k 3 days ago

      To me, it's rather a postcondition instead of precondition.

      Trust that a person will do the right thing, but verify that the right thing has actually been done.

      • stavros 3 days ago

        Which is a contradiction, because trusting someone means not needing to verify what they say or do.

        • laserlight 3 days ago

          To the contrary, trust arises from being able to verify. Otherwise, you never know whether a failure is because of ill intentions or errors. That's why I very much dislike “trust, but verify”. I prefer “trust, and verify”.

          • stavros 3 days ago

            What's the meaning of "I trust you that the amount of cash you've given me is correct" when you then proceed to count it? If you're verifying, why do you need to say "I trust you"? A trustworthy and an untrustworthy person will get equally verified.

            • dncornholio 3 days ago

              Has nothing to do with trust.

              I trust that you will give me the money. I also trust that you counted the right amount. But I still want to make sure you did not make a mistake.

              • robertlagrant 3 days ago

                I suppose the question is: why bother with trust? Why not just "verify"?

                • t_mahmood 3 days ago

                  I trust you, about your intention and effort. But I verify because there are too many moving pieces in the world that can go wrong, that you may have no control over

                • dncornholio 3 days ago

                  Well the thing is, that’s how trust already works isn’t it? Trust doesn’t require any effort. Trust means I won’t put in effort. So I just verify, because I trust.

                  • robertlagrant 2 days ago

                    Okay, so why say it? Why not mention breathing and eating as well, given we're already doing them?

                    In other words: why is it included?

      • worble 3 days ago

        > postcondition instead of precondition.

        How can you verify as a precondition instead of a postcondition? You can't verify anything until the act has been completed.

globular-toast 3 days ago

> In reality, the bank does not just store the money we deposit. It loans away/invests most of the money that people deposit. Our money does not sit idle in a large pile in a vault.

Nope. In reality the money doesn't exist. Amazing how many people think that somewhere a cartload of money is physically moving every month when they get paid. When was the last time you deposited anything in a bank? The abstraction is even higher than that. It's just numbers in a computer system. The abstractions work because banks are "too big to fail".

  • IshKebab 3 days ago

    I don't think the author was under the impression that banks physically store money. They meant the digital currency does not sit idle on the bank's asset list. They're just using normal human language.

    • globular-toast 3 days ago

      The deposits aren't assets to the bank, they are liabilities. The loans are their assets (your liability is their asset). They don't need to "do anything" with your deposit because it doesn't exist. If I increment the number 2 to the number 4, have I brought something into existence, in particular 2 "things"? What is that? Why can't I just bring 3 things into existence without incrementing the initial number?

      If this sounds silly it's because it is. I thoroughly recommend anyone who is confused to a) do their own accounts and b) invent a silly currency inside your accounts and start a bank for your imaginary friends. Just add trust and government support to your bank and you'll be like any other bank.

      • IshKebab 3 days ago

        > They don't need to "do anything" with your deposit because it doesn't exist.

        Uhm yeah they do. If they take my money and then just do nothing with it then they won't make any money from it. Banks invest your money.

        (I don't think that's the main way they make money - it's probably mostly from credit card interest, but they definitely do it.)

        The money you pay into banks absolutely exists in every sense.

        Banks can create money when they issue loans, which I suppose you could argue doesn't exist. But they aren't allowed to create unlimited money. I'd say it exists as much as any other money exists.

        • globular-toast 3 days ago

          You're wrong, but you're far from alone. Essentially the way the whole finance industry works is there is an enormous gulf between their understanding of money and the general public's understanding. They don't contribute anywhere near as much to society as people think, in fact they fuck a lot of things up, but they run the books, of course they'll come out on top.

          There's a great intro to how banking really works today here: https://positivemoney.org/how-money-works/banking-101-video-...

  • xyzzy123 3 days ago

    This is going to sound weird but even if it was actual coins (say) would that make the whole thing any more "real"?

    Either way it's a symbolic system and it doesn't really matter if you execute it on an abacus, or in a digital computer, or by writing in a book and carting bags of coins around.

    I think the parts that are missing from most people's mental model are things like the role of the central bank, and the fact that lending limits are influenced but not determined by deposits.

    • JonChesterfield 3 days ago

      Yes, physical coins or grams of gold or whatever are more real. Primarily because they add inertia to changing the float, it takes time to find more gold or make more coins. You can't 10x the supply overnight. Things changing more slowly damps out oscillations.

    • globular-toast 3 days ago

      > This is going to sound weird but even if it was actual coins (say) would that make the whole thing any more "real"?

      Yes, because then there would be a physical limit on what banks can do. They would have to invent cloning machines or alchemy or whatever to do what they currently do. In other words, it would force banks to actually follow the model most people (including the author) have in their heads, namely that a bank stores cash and lends it out (called fractional reserve banking).

      In reality, without any such physical constraints, the banks are essentially free to "create" money out of thin air by issuing loans. That's how the money supply became 99% "bank money" and only ~1% physical money.
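
A toy double-entry sketch of that point (a deliberately simplified model, not real banking): issuing a loan creates a matching deposit on the other side of the balance sheet, with no pre-existing cash involved.

```python
class ToyBank:
    """Deliberately simplified: one asset line, one liability line."""
    def __init__(self):
        self.assets = {"loans": 0}
        self.liabilities = {"deposits": 0}

    def issue_loan(self, amount: int) -> None:
        # Double entry: a new asset (the loan receivable) is matched by a
        # new liability (the borrower's freshly credited deposit).
        self.assets["loans"] += amount
        self.liabilities["deposits"] += amount

bank = ToyBank()
bank.issue_loan(1000)
# The balance sheet expanded by 1000 on both sides; no vault cash moved.
assert bank.assets["loans"] == bank.liabilities["deposits"] == 1000
```

Real banks face capital requirements and settlement constraints that this sketch ignores; it only illustrates the bookkeeping mechanism being described.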

  • jampekka 3 days ago

    Indeed, banks don't loan/invest deposits. Banks conjure money out of thin air when a loan is made. Some amount of deposits (a small fraction of the loans) is sometimes needed, although e.g. the USA has scrapped its fractional reserve requirements.

    I guess people have a hard time accepting this because it seems too absurd to be true. It's also common to mistake money for a scarce resource/commodity, because that's how it looks to most individuals (and that's the story usually pushed on us plebs).

  • ttoinou 3 days ago

    Agreed. But a single bank failing doesn't mean the whole financial system would go down. We need to get rid of the bad apples. Free banking.

  • Vecr 3 days ago

    Fractional reserve banking is old school; zero-reserve banking is where it's at.

mulmboy 3 days ago

> Failing tests indicate the presence of bugs, but passing tests do not promise their absence.

If only :)

Far too often I find myself working with tests that patch one too many implementation details, putting me in a refactoring pickle

  • yxhuvud 3 days ago

    Or even worse, tests that test implementation details that don't matter for the actual outcome.

    • hyfgfh 3 days ago

      If I had a dollar for each frontend test that doesn't actually test anything, I would be able to retire by now!

      • orwin 3 days ago

        Tests that don't test anything come in at least two categories for me:

        - tests that were useless, are still useless, and will always be useless

        - tests that are currently useless but were useful in the "wtf should I write" phase of coding (templating/TDD/whatever you want to call it).

        I'm partial towards the second, and I like it when they're not removed, because you often come to understand how the API/algorithm was coded thanks to them (and it's often the unit tests). But ideally, both should be out of a codebase.

      • bdjsiqoocwk 3 days ago

        Cargo cult testing. Some people don't understand the point of testing so they just go thru the motions and end up with something that makes no sense.

        • throwawaysleep 3 days ago

          Only needs to be management that misunderstands.

          I’ve written plenty of do nothing tests in my time to be sure that management regularly got a report of tests being added.

        • blitzar 3 days ago

          "It passed all the tests so it must be working, it must be something you are doing wrong"

    • inopinatus 3 days ago

      There used to be (and perhaps still is) a nasty habit in Rails apps of having vast test suites covering every ActiveRecord query they ever used (with fixed seeds to boot), rarely straying from giving the bog-standard, already very thoroughly tested and battle-scarred AR predicate builder a wholly unneeded workout, but covering none of their own front-end code, because writing for Selenium was too hard.

      But look! Thousands of tests and they all pass! Taste the quality!

      • yxhuvud 3 days ago

        > but none of their own front-end code because writing for selenium was too hard.

        I've also seen plenty of tests that check whether a template was rendered rather than whether the thing it actually outputs was in the output. That just calcifies the implementation, making it hard to change.

        But it is a tradeoff, and a hard one, because if you do all the things all the time, combining all variations of the database with all variations of the views, you end up with a test suite that takes forever to run. Finding the right tradeoff there has not proven to be obvious, sadly.

    • germandiago 3 days ago

      One thing I do sometimes is to start part of an API in a TDD style. Everything starts very "basic", which adds a lot of relatively trivial test cases.

      When done with that phase, once my API looks relatively functional, I remove all the relatively trivial tests and write bigger ones, often randomized and property-based.

      This works decently well, and you don't end up with an army of useless tests hanging around after the process is done.
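
A stdlib-only sketch of that second phase (using a sort function as a stand-in API; the `hypothesis` library automates this kind of thing properly): instead of many hand-written cases, assert properties that must hold for any random input.

```python
import random

def my_sort(xs):
    """Stand-in for the API under test."""
    return sorted(xs)

random.seed(0)  # reproducible run
for _ in range(200):
    xs = [random.randint(-1000, 1000) for _ in range(random.randint(0, 50))]
    out = my_sort(xs)
    # Property 1: the output is ordered.
    assert all(a <= b for a, b in zip(out, out[1:]))
    # Property 2: the output is a permutation of the input.
    assert sorted(xs) == sorted(out)
```

Two hundred random cases covering empty lists, duplicates, and negatives replace dozens of hand-picked trivial tests.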

  • quectophoton 3 days ago

    > refactoring pickle

    Been there. Change one tiny thing, and 20 tests fail all over the place. But hey, at least we had ~95% test coverage! /s

    The more time some piece of code has survived in production, the more "trusted" it becomes, approaching but never reaching 100% "trust" (I can't think of a more precise word at the moment).

    For tests it's similar; the longer they had remained unchanged while also proving useful (e.g. catching stuff before a merge), the more trusted they become.

    So when any code changes, its "trust level" resets to zero at that point, whether it's runtime code or test code. The only exception might be if the test code reads from a list of inputs and expected outputs, and the only change is adding a new input/output to that list, without modifying the test code itself.
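
That input/output-list exception can look like this (a sketch; `slugify` is a made-up function under test): new cases extend the table without touching the trusted test logic.

```python
def slugify(title: str) -> str:
    """Hypothetical function under test."""
    return "-".join(title.lower().split())

# The table is the only part that grows; the loop below never changes.
CASES = [
    ("Hello World", "hello-world"),
    ("  spaced   out  ", "spaced-out"),
    ("already-a-slug", "already-a-slug"),
]

for given, expected in CASES:
    assert slugify(given) == expected, (given, expected)
```

Because the driver loop stays frozen, its accumulated "trust level" carries over even as coverage expands.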

    Tests that change too frequently can't be trusted, and chances are those tests are at the wrong level of abstraction.

    That's how I see it at least.

    • olivierduval 3 days ago

      Just been bitten by a bug in a production system... hidden silently in the code for more than 10 years!

      It just means that for 10 years this code path was not taken (the conditions for this specific error case were not met for 10 years) :-(

      Actually, it would be good monitoring information to know which paths are "hot" (almost always taken since the beginning), "warm" (taken from time to time), or "cold" (never executed). It could help build targeted trust. I guess it might be possible for VM languages (like those based on the JVM), because the VM could monitor this... but it might be harder for machine code.

      • aunderscored 3 days ago

        This could be interesting. Unfortunately, it'd be a performance hog to do. Some kinds of things do work this way, though (see profile-guided optimization in compilers).

lenerdenator 3 days ago

You sort of have to trust yourself, but verify it against what others expect.

That's the point of code reviews.

Trust me; I'm an expert on never trusting myself.