Ask HN: Trying to find a post about some OS developer in the 80s coding by hand
Hi, as the title says, I've tried everything (checking my bookmarks, asking ChatGPT) but cannot find a post where someone tells the story of an engineer who worked with pencil and paper for a month or so, then typed in the code in one go, and it worked flawlessly.
He was writing some niche operating system, and the blog was a collection of posts about that system.
I also remember from the discussion thread that the developer had passed away.
This looks pretty close to your description: https://news.ycombinator.com/item?id=39342143
EDIT: Looks like the parent post entered the second chance pool (https://news.ycombinator.com/item?id=26998308). Not only did all the timestamps get rewritten, but apparently I can now edit this day-old comment :) Interestingly, it does show the correct timestamp when editing. Off-topic, but I thought it was interesting behavior worth mentioning.
That was it, thank you very much! :)
Reading the original article, I can't help recalling what David Cutler said in an interview: whenever he completed a piece of code, he always ran it in his head a few times to make sure that nothing could go wrong, and then submitted it (presumably for testing). His NT code reportedly had very few bugs.
I think 10x programmers are indeed usually smarter than ordinary programmers. They don't produce 10x the amount of code, or code of 10x the quality; they work in COMPLETELY DIFFERENT DOMAINS -- OSes, compilers, emulators, etc. They simply solve tougher problems.
And they also have the capacity to hold a lot of code in their brains. This is so amazing. I usually lose track after maybe a couple dozen lines, or a few function hops. Admittedly, my program uses more libraries, so TBH the mental trace just stops whenever it goes into a 2000-line library function. But my project is more trivial than theirs.
> whenever he completed a piece of code, he always ran it in his head a few times to make sure that nothing could go wrong
It's a bit disturbing to me that this is being voiced as a feat of some kind. Everybody should be doing this and most people are capable of it. I'm sure you are as well, despite your own pessimism. If you're not doing this, there's been some failing among your mentors. You can develop this skill with intent and practice and you really really really should.
That's what the programming part of our job is: not pasting in snippets from Stack Overflow, not asking Copilot for something that you hope is okay, not vomiting code until it gets a big green OK from your tooling, not gluing together library functions whose implementation and constraints we don't understand. You should be developing a clear mental model of your code, what supporting utilities it references, how those utilities work and what constraints they impose, as well as actively considering edge cases that need specific attention and the constraints your code projects outward to its callers.
You won't be able to do this consistently during your first couple years of working. And that's fine. But something's gone deeply wrong with your craft if you haven't gotten a handle on it once you're more professionally mature. But rather than feeling incapable of doing it yourself and in awe of those who do, you just need to commit yourself to practicing it as much as you can until it becomes second nature.
> Everybody should be doing this and most people are capable of it.
I agree with you that everyone should be doing this, but I disagree that most people are capable of it.
I have two points:
1) Running code in my head tires me out, so I can't do it for very long. What I observe is that 10x programmers can do this consistently, even when they are tired. John Carmack is a very good example. I won't say it's in the genes, but I suspect most people don't have huge room to improve -- or, if theoretically they have huge room to improve, realistically they don't.
2) Even when I can run code in my head, when I'm in good condition, I can only run MY part of the code, i.e. I simply cannot run any library code. I'm working on a simple hex editor with C++/SDL2/ImGui, and running SDL2 and ImGui code in my head is way beyond my capability. Last night I kept running a piece of code in my head and couldn't really find ANY place it could go wrong, until I figured it must be the scancode that ImGui was using, so I switched to a recent version (mine was from 2020) and the issue went away. What I observe about 10x programmers is that they usually work on very low-level problems, so they have the benefit of not relying on external libraries. David Cutler is a good example here.
Like I suggested, developing the skill is largely a matter of practice, and it becomes both easier and more robust as you work to hone it.
It is not the defining characteristic of "10x developers" like Carmack. He's of an especially rarefied sort and not really someone to look at as a role model in the first place, because his experience and background are so idiosyncratic.
What we're talking about here is just an essential characteristic of "competent developers" and is among the array of characteristics that distinguish early-career developers from more genuinely senior ones. Your stepwise growth in this skill correlates strongly with the value you'll provide as a developer (and therefore the roles and rates you can pursue).
Next time you encounter a frustration like you found in (2), don't just blindly update ImGui and hope for the best. If you think it might be the problem (a great insight because you were modeling it!), open your local version of ImGui and see if you can find where it's causing the problem, because you're now going to be very invested in reading ImGui closely (to find the issue) and you're going to learn so much that will help you better model ImGui in the future, as well as better model other utilities.
After you've put some effort into that, and hopefully even found the issue (but perhaps not), then go to GitHub and try to use changelogs, issues, PRs, etc. to see if you can find some specific commits that might be related to the issue. Analyze those commits yourself and see what you can learn from them, improving your understanding of utilities like ImGui, how they're implemented, and where bugs like the one you encountered might lurk, so you can track them down more quickly in the future. Only then should you consider updating your local version.
And while you might be reading this and thinking "who's got time for that?!", it doesn't really take that much time once you get the hang of it through practice, and every time you do it, you're making huge investments into your own proficiency and value (-> future roles and rates). Don't skimp, just do it!
(As for #1 - "getting tired" -- aspiring athletes, whether professional or amateur, get tired during training too. And that's okay, because again, it's all a process of development and growth. To train, you push as far through the tiredness as you can, and then make the compromises you need to, always trying to push yourself a little bit farther before you do. By doing this, that wall of tiredness moves further and further out and you become that much more capable and productive.)
I did. I did look into the source code and hooked up gdb to step through it. I should have mentioned that in the reply, but I didn't because I never figured out the why. That's why I guessed the scancode was the issue: execution never entered the if statement for BACKSPACE. But I didn't confirm that, nor did I change the scancode to see which value was correct, because I got really tired after a few nights of debugging.
Like I said, I agree with you in principle. Sometimes I feel guilty about not knowing the real issue and just glossing over it. Wouldn't 10x programmers want to figure out the root cause of this kind of bug? But realistically -- and this probably sounds defeatist -- it burns me out quickly when I do that. It might be that I'm genetically prone to impatience and frustration, or that I'm not mentally well, or whatever. I know it sounds defeatist, so I'd prefer not to dwell on it.
But again, thank you for your reply, and I will keep trying to push a bit further every time.
I think AI can help reduce the burnout. Send it some code that you think may be relevant and ask about it, then verify after you see the AI's explanation. Of course AI can burn you out too, if it doesn't find anything useful, but it can be a time-saver for this purpose sometimes.
Basically try to take advantage of when verification is easier than what you're asking AI to do for you.
Thanks. I did use AI for the first pieces of the code because I couldn't wrap my head around ImGui at the time. I was very wary of using AI to write code, though, because I still want to understand things. So I stopped using it during the last two nights.
I can't complain about the code quality as it's decent, but I just want to get better at reading the source code, I guess.
I can't run code in my head if I'm working at a high level where every function calls into thousands of lines of code. This comes up a lot at work.
If I'm writing 100 lines of greenfield code then yeah, anything is easy then.
From my original comment:
> what supporting utilities it references, how those utilities work and what constraints they impose
The responsibility isn't to memorize and model every function called up and down your whole stack. Often, you don't even have full insight into that if you wanted it and of course you couldn't hold that all in your head even if you wanted to. But you don't need to.
The responsibility is simply to thoroughly understand how each function you call works insofar as you're using it.
You should be confident, not hopeful, that the state you've arranged for that function call is a valid state for it; you should have an informed, not incurious, sense of its general behavioral characteristics (fast or slow, high or low resource demands, thread safety, etc.); and you should be able to make informed predictions about what its output should look like given the state you pass in.
Its actual implementation will often be opaque, or at least opaque at some depth, but between the function's documentation, any access to its source, and your own insight into how something like that function would likely or necessarily be implemented, you can and should be able to fully model it for the purposes of your own invocation.
That's why I think getting the right position, right team, and right work is the only antidote to this.
If you work close to the metal, you only have 1-2 levels of abstraction. If you need to call a library which calls a library which calls a VM which calls some syscall, you simply don't have the brainpower to trace all of that down -- plus you aren't allowed the luxury of tracing it down, because the ticket is wanted ASAP.
Getting the right job is the only thing needed to take you away from this hell.
There was a blog post the other day arguing that great developers aren't 10x but infinity-x. It sounds ridiculous, but their point was that great developers don't simply program faster but solve problems other developers are entirely unable to solve. And that makes sense to me. It's the difference between an AI boot camp graduate and someone with an ML PhD. They might apply a sklearn function at the same speed, but as soon as complex problems appear, the former hits an unsurpassable wall.
edit: https://www.seangoedecke.com/weak-engineers/
> It's the difference between an AI boot camp graduate and someone with an ML PhD. They might apply a sklearn function at the same speed, but as soon as complex problems appear, the former hits an unsurpassable wall.
I'm sure you're generalizing to make a point but I can assure you this is often not the case, I know several PhD's who have a very hard time dealing with real-world problems that any practical engineer would catch. There is a lot to be said for theory, but there is also a very hard limit.
> I know several PhD's who have a very hard time dealing with real-world problems that any practical engineer would catch.
I don't know the details, but this could exactly be the proof, not the counter-proof. They do different things, but they may not perform ordinary jobs very well. What if we ask John Carmack to play with CSS, move those pixels so the UI looks "perfect"?
You might be right. Points for giving me a good giggle thinking of John Carmack pulling his hair out over CSS :)
Yeah would love to see that too...but Casey venting is good enough.
OTOH, I've interviewed folks with CS PhD's that literally could not write fizzbuzz. And that's given a head-start with a template program in an IDE, where all they needed to do was flesh out the core algorithm.
Not necessarily to disagree with your overall point. But having a PhD is maybe not always as significant as we think it is. Getting a PhD (usually) means extreme specialization in a (possibly highly niche) area, and may well leave someone without a lot of basic skills you might expect them to have.
I don't have a PhD, so I'm hesitant to comment, but I've always had the impression that it was more like grinding in some video game than it was anything truly intellectual. Then at the final boss battle, can you or can you not have ready answers for ridiculous questions.
I recall a psychologist or psychiatrist who challenged the notion of "defiant personality disorder" by observing two things:
(1) Many of these "defiant" people merely didn't trust credentials -- they were perfectly fine with authority who earned deference by virtue of proving they really do know what they are doing, and
(2) That the psychology/psychiatry profession in general, consisting of people who have their Masters and PhDs, have to "suck up" to a lot of credentialed authority, without question, to get their degrees -- and thus it's only natural for them to expect everyone to unconditionally respect credentials!
(For the record, I have a PhD, but it's in pure math, which is possibly simultaneously both the least practical and most practical thing you could possibly learn -- but as such, I'm tangential to engineering and physics -- and I'm pretty sure that all three of these fields have a certain "fine, you have a credential, but can you really walk the walk?" element to them.)
Right. My point isn't that doctors are particularly good at programming (they're probably not). I mean that some people have skills, or raw talent, that allow them to do what others simply cannot at all.
Fair enough!
Many years ago, when I was learning about pair programming, I remember someone (possibly even Kent Beck) saying that "Pair programming is kryptonite for incompetent introverts!" and I remember thinking, "Well, yeah, but I bet it's the haven of incompetent extroverts!"
While I haven't really been in forums debating the merits and perils of paired programming, I cannot help but be amused by this essay, that pretty much confirms this initial thought I had about paired programming!
I'd say my most valuable skill is identifying and making problems disappear completely, but it's more a result of having an extensive experience to pull from and pattern match against.
Do you happen to have a link?
https://www.seangoedecke.com/weak-engineers/
Yeah exactly.
I have been very interested in learning all kinds of details from the Archmages, so I gathered as much information as possible. From what I observe, great minds do great things.
Id argue that "running the code in your head" is something everyone should do to a degree. Just try and see which paths are interesting because they could potentially fail. Only then submit for review
I once had an (older) Russian colleague tell me about how he learned to program. He would write it out longhand, because access to compilers was limited to certain time slots at university. He knew it had to work the first time; otherwise he had to wait for the next chance to fix the bug.
I'm sure that was true for everyone back in the punchcard days. It would enforce a kind of rigor that I can blissfully ignore.
edit: I see the exact same story in the linked thread, so clearly a lot of Russians are very proud of that skill
Yes, this is common ground for "old" programmers.
Quite simply, when you had to walk across campus or at least to a different room to submit your card deck, wait (perhaps hours) for the results (printed on fan-fold paper, that again you had to go to a different building or room to pick up) only to find your program didn't compile due to a syntax error or didn't run due to a careless bug, you learned to "desk check" your code, run the program in your head, and be as sure as you could be that there were no errors.
Even when we got connected terminals, it could still take hours for your compile job to work its way through the queue of pending jobs, because it was in a development region that only got resources when the production queue was clear. You didn't use the compiler as a syntax checker in those days.
That all started to change when we got PCs or workstations, or at least good interactive multiuser development environments, and a "code-compile" loop or REPL became part of the standard toolkit.
I have an older guy that I work with - PhD in Math (because CS didn't exist then) - who does lots of algorithm development with me. I often get Word docs of pseudocode from him. I'll do a search-and-replace on things like "LET" and "ELSE IF", and a very high percentage of the time, if I run it in Python, it works on the first try. Kind of amazing to me.
The hard part for me is then translating his ideas into vectorized numpy for speed, but at least I get the right answer to check against.
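As a rough sketch of what that translation work looks like (the pseudocode logic and function names here are made up for illustration, not taken from those actual Word docs): a direct loop-style translation of LET/IF/ELSE pseudocode, checked against a vectorized numpy version that should give the same answer.

```python
import numpy as np

def clip_and_scale_loop(xs, lo, hi, scale):
    # Direct, line-by-line translation of LET/IF/ELSE-style pseudocode.
    out = []
    for x in xs:
        if x < lo:
            out.append(lo * scale)
        elif x > hi:
            out.append(hi * scale)
        else:
            out.append(x * scale)
    return out

def clip_and_scale_numpy(xs, lo, hi, scale):
    # Vectorized equivalent: clip to [lo, hi], then scale,
    # with no Python-level loop.
    return np.clip(np.asarray(xs, dtype=float), lo, hi) * scale

data = [-5.0, 0.5, 2.0, 99.0]
assert clip_and_scale_loop(data, 0.0, 10.0, 2.0) == [0.0, 1.0, 4.0, 20.0]
# The slow loop version serves as the "right answer" to check against.
assert np.allclose(clip_and_scale_numpy(data, 0.0, 10.0, 2.0),
                   clip_and_scale_loop(data, 0.0, 10.0, 2.0))
```

Keeping the naive loop version around as a reference implementation is exactly the benefit described above: the vectorized rewrite can be verified against it.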
I sometimes wonder whether following some of these practices may promote more mediocre programmers, if they so wish, to become better ones.
- Think through and write on paper in pseudo code first;
- Run written code in head, or on paper if they don't have the capacity, a couple of times before pressing that BUILD menu item;
- Restrain from using libraries if a custom, better solution is possible;
But then I think it probably doesn't make a lot of sense if you work in frontend pushing out JS, or as a data engineer writing Python code which calls some Scala code which runs in a highly complicated JVM, because the whole industry is "AGILE" and the chain is way too long. You need to get into the right job to nurture such a mindset. The old timers got lucky. They started with 6502 bare metal.
That's why I'm pushing myself to get out of Data Engineering, to do something lower level, until I get deep into the basement. I probably won't succeed here but it's fun to try.
Not sure if actually writing it out on paper is necessary. But along these lines I will often start my code by just writing comments explaining what the code in this file does. Then as I get into writing code, I break up the comments and insert the functions/classes below the comments that describe what they do. So sort of literate programming, but I don't usually go to the lengths that would really qualify that description..
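A minimal sketch of that comments-first flow (a hypothetical toy file, not the commenter's actual code): write prose comments describing what the file does, then break them up and grow the functions underneath them.

```python
# This module parses simple "key=value" config lines.
# Blank lines and lines starting with "#" are ignored.

def parse_line(line):
    # Strip whitespace; skip blanks and comments by returning None.
    line = line.strip()
    if not line or line.startswith("#"):
        return None
    # Split on the first "=" only, so values may contain "=".
    key, _, value = line.partition("=")
    return key.strip(), value.strip()

def parse_config(text):
    # Collect all parsed pairs into a dict; later lines win.
    pairs = (parse_line(l) for l in text.splitlines())
    return {k: v for pair in pairs if pair for k, v in [pair]}

assert parse_config("a=1\n# note\n\nb = x=y\n") == {"a": "1", "b": "x=y"}
```

The comments here started life as the whole file; the functions were filled in below them afterward, which is the lightweight literate style being described.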
I disagree about not using libraries. Libraries are almost always going to be better tested, better reviewed, and handle more edge cases than anything you come up with in-house. Of course if no suitable library exists you have no choice.
I agree with the library being better tested part, so that's why I think it's better to find a job that actually doesn't allow the use of libraries (or too many of them), than to try to go to the bottom in a job that has 5 layers of abstraction.
It's good to hear the literate programming thing. I sometimes do that on paper when I need to clear my mind. Basically code in English but with C type {} as scope.
I recently switched build tools because I couldn't get rid of a ten second delay in recompiling/rerunning my back end app when I saved a file.
I don't think this practice completely disappeared until laptops became commonplace. As late as 1996, I remember hand-writing several pages of assembly code during some down time at a conference; I had an idea I wanted to try out, but there were no computers in the conference center.
Great! Macroexpanded:
It Can Be Done (2003) - https://news.ycombinator.com/item?id=39342143 - Feb 2024 (137 comments)
It Can Be Done (2003) - https://news.ycombinator.com/item?id=18415231 - Nov 2018 (18 comments)
(re the timestamps see https://hn.algolia.com/?dateRange=all&page=0&prefix=true&que... - only the front page and /item pages show the re-upped time. Good point about the edit window, I missed that twist!)
Oh that surely must be it! Even if it isn't though, that is a neat little anecdote and I'm glad I've had the chance to read it - I hadn't seen it before.
Yep it was it :)
Note, one of the links in that article goes to the source code in question:
https://multicians.org/vtoc_man.html
I was puzzled by the source at first -- it seemed like Assembler, but was too high-level to be that -- and it looked too complex to be BASIC (or possibly JOSS, I suppose) -- I was a little taken aback to see the extension of the filename as "pl1", because I sort-of thought PL/1 was higher-level than that!
Was it the Story of Mel?
https://news.ycombinator.com/item?id=32395589
He also passed away relatively recently.
https://melsloop.com/docs/the-story-of-mel/pages/mel-kaye-cv
This was pretty common back then. I don't recall the post you are talking about but I wrote a little about my own experience here: https://blog.jgc.org/2013/04/how-i-coded-in-1985.html
When I was in college in 1995/96 (and later 1999/2000), this was somewhat how I programmed too. While I had access to computers at both home and school, my access to them was somewhat limited, and the tools I had at my disposal were very limited (especially at home).
I cannot help but reflect on how my approach was a "hybrid" between both pencil-and-paper and modern-cli-and-ide -- we were coming out of the age of really simple home computers, but not yet in the age of super fast computers with large monitors.
IIRC, Dijkstra's THE operating system for the Electrologica X8 was completed before the hardware was available. It was the first multitasking operating system (albeit with a fixed set of tasks) and the first use of semaphores. Supposedly the initial version only had trivial coding errors.
I worked with David Cutler and he very much did what he described doing. Others on the kernel team followed similar practices. There is code Dave and I modified together where we then sat and audited it together.
This was all code in the heart of an OS - thread switching, interrupt dispatch, synchronization mechanisms - things where even the most rare and exotic error might actually occur and cause a disaster.
But some hazard/cost computation is needed. There was an article in the '90s about a team doing software for a robotic arm for space work (maybe on the Shuttle) - they were hyper careful. I figured out that if all of Windows had been written at that rate of code output, it would have taken 100 years and cost several trillion dollars. Not long after that, the space arm suffered some kind of software failure, in space. It wasn't for want of effort by the dev team.
Remember that many errors arise from things outside the code you wrote/studied - some other code corrupted something, buggy behavior in hardware, and so forth.
As for coding by hand, simulating by hand, flow graphing by hand, I don't think those were all that unusual, just one person took it to extremes and wrote about it.
Does anyone know if the original source as diagrams and pencil code is preserved somewhere?
It looks as if some of them might have been re-drawn for:
https://www.multicians.org/nss.html
and this is the source code in question?
https://www.multicians.org/vtoc_man.html
They used to have print templates for this sort of thing. Here's an example from the 1960s: https://try-mts.com/system-360-assembly-language-introductio...
We used to have to do this in high school in AP CS in the early 90s. All the tests were to write such and such with pencil and paper. We would get marked off for syntactical mistakes.
I had totally forgotten about this. I still have the PASCAL book.
I remember Mohammed Said (of Laravel fame) telling this story to Matt Stauffer on the Laravel Podcast. He wrote his programs at home on pen and paper and went to the public library to try them out.
This isn't really that uncommon of a thing. Just talk to your ex-Soviet friends. I worked with a Network Engineer whose computer time in uni was limited, so he did exactly this.
In Romania when I was a kid I had access to computers at some sort of kids' club only once or twice per week for a couple hours.
So I wrote BASIC programs in the back of my school notebooks and typed in some of them when I got to the computer.
Turing and Champernowne coded Turochamp by hand in 1948, but never executed it on a computer. The instructions were however executed with Turing as the computer:
https://www.history.com/news/in-1950-alan-turing-created-a-c...
I couldn't find the exact post, but it might be related to the story of Terry Davis, the creator of TempleOS. He was known for his unique approach to coding and his dedication to his project. Unfortunately, Terry Davis passed away in 2018.
https://hn.algolia.com/?dateRange=all&page=0&prefix=true&que...
It can also be Mel
https://hn.algolia.com/?q=mel
Unfortunately it's not any of them. The title was not very descriptive, something akin to "My friend Andre", but I didn't find anything with those words, so maybe I misremember the name.
I didn't think this was Mel, but I also came here to talk about Mel. Hand-selecting memory addresses to ensure that the next thing you need is always the next thing about to be read from disk is just craftsmanship on an entirely different level.
I think there’s a story in the Bill Gates biography “Hard Drive” about when he flies to MITS in New Mexico to show off BASIC (I think), then realizes he forgot to write a bootloader, so does it on the plane on a yellow legal pad. Toggles it into the Altair during the demo and it works on the first try.
(Might have a lot of details wrong)
This was extremely common in the 80s.
I've got a pile of notebooks full of hand assembled Z80 code that my dad wrote in pencil for the Exidy Sorcerer, which he got in 1979.
It was easier to do that, and reason about your program on paper before running it on the actual computer.
I have the same problem finding an article about meta problem solving. The writer compared it to skiing or judo or some sport. No combination of keywords conjures up the right article in search.
Have you tried using Hacker News Search?
https://hn.algolia.com
No luck, as I don't have a keyword from the title in mind. I remember that at the very end of the blog post he had to fix 1-2 typos and everything worked correctly.
The blog (in old-school design) had a lot of posts and pictures of that era and of the people involved in building that OS.
Ah yes, the web burns shortly after we shoot it, and we left no bread crumbs to navigate the forest. It is true! We used pens, pencils, and graph paper to write down and draw all kinds of things while using the computer. Paper has very robust IO. A single notebook can hold as many glyphs as a ship full of clay tablets, but after 40 years you have no idea where you've put it.
I hope @dang remembers? :)
why does this make me immediately think of TempleOS