Advice for new software devs who've read all those other advice essays (buttondown.email/hillelwayne)
399 points by BerislavLopac 4 months ago | 345 comments



I've been working with one junior and one not-so-junior programmer on a hobby project recently, and they are both "Right Way Guys". For a roughly 300 LOC project with a Discord bot and some Rust code that only we will be running for now, they insisted on complete documentation, separate VMs for QA and "prod", systemd deployments, a templating system for a few strings, an ORM layer for four (4) SQL queries... this is a project that maybe 10 people would use. These are only half the requirements for the "0.3" release, with much more over-engineering planned for the future. I have stopped working on that project :)

It's frustrating and sad to see people do that. I was a "Right Way Guy" myself for the first few years of my programming, before I learned how much depth there is in CS besides junk like best practices and code style, and how to focus on the only thing that matters: working code. They are often too convinced of their own rightness.


The situation you described doesn’t sound like “Right Way Guys”. It actually sounds like “Bikeshedding” [1]. This means giving a disproportionate amount of attention or importance to the trivial details while neglecting or giving less attention to the significant issues.

Imagine a committee commissioned to approve plans for a Nuclear Power Plant. But the committee spends all their time discussing the color of the bike shed that they want built nearby.

In your case, their focus on separate VMs for QA/Production, systemd deployments, a templating system for a few strings, and an ORM for a few SQL queries, especially for a project with a limited user base (10 people), really exemplifies Bike-shedding.

They’re emphasizing minor, arguably unnecessary details rather than the core functionality or purpose of the project [2]. This usually occurs because these trivial aspects are easier to understand and discuss, especially for junior devs, which leads to increased involvement on minor details while the more meaningful parts of a project (which might be more challenging to address), are overlooked or given less attention.

IMO, a good leader knows how to strike a balance between the “Right Way” and avoiding the pitfalls of “Bike-shedding”.

[1] https://en.wikipedia.org/wiki/Law_of_triviality

[2] I would argue complete documentation of the meaningful parts of the project is not bike-shedding.


Bikeshedding is the act of debating trivial details. They're just overcomplicating something that should be simple, but it doesn't seem like there was much of a debate about it at the time, so it's not bikeshedding.


It sounds more like a case of "Use the tools you know". Like, the stated ORM toolkit may be complete overkill, but if they don't have experience doing object relational mapping by hand that's one more thing to learn to get the job done. The asserted goal is to deliver something for a group of people to use, so it's not a hobby of trying to find as many new lessons to learn as possible.


I find it funny when this behavior is described both as one of the symptoms of quiet quitting and as the Allies' advice to rebellious German workers in WW2.


Sounds more like scope creep (due to "Right Way") rather than arguing about details.


Not having separate VMs for QA and Prod seems like madness to me


Which model regurgitated this reply?


I'd argue that they're not completely wrong in doing those things.

Many of those things you list really don't take too much time to do, like writing systemd units or using an ORM. But they really help when anyone needs to take a look at things in the future or someone else wants to contribute as well later on. Besides, they're easier to do when things are still fresh in the mind, and these kinds of chores rarely get done later on when a project has grown.
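
For a sense of scale, the whole unit file for something like this fits on one screen. A minimal sketch (the binary path and user are made up for illustration, not taken from the actual project):

    # /etc/systemd/system/mybot.service (hypothetical)
    [Unit]
    Description=Hobby Discord bot
    After=network-online.target
    Wants=network-online.target

    [Service]
    ExecStart=/usr/local/bin/mybot
    Restart=on-failure
    User=mybot

    [Install]
    WantedBy=multi-user.target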

This being a hobby project may also be a reason why the other programmers want to do things right; they may get satisfaction and learn new things by doing it this way!


> This being a hobby project may also be a reason why the other programmers want to do things right; they may get satisfaction and learn new things by doing it this way!

Exactly my thought. Hobby projects are great opportunities to practice those skills; if you don't, they never manifest.


In the movie Primer, a group of four friends start a small computer business on the side. Two of them later invent a sci-fi box and then dramatic shenanigans ensue.

Something I missed at first [1] was the significance of an event that happens early in the movie. They need to buy a $50 router because their existing one is broken. They have an impromptu meeting where one of the characters gives suggestions on how they could try to fix the existing router. The suggestions are all half spoken before being answered: "Did you try the ..." : "yes"; "well, how about resetting the ..." : "I tried that too". Clearly they're basically reading each other's minds because of how well they know each other and how much time they spend with each other.

But then comes the 'punchline'. One character says 'yes' to buying the new router. The other responds with "I need an 'aye'". Their side business apparently has very strict rules about what words they need to use for voting on decisions. It doesn't matter that they all know each other so well that they're finishing each other's sentences. It doesn't matter that this is a $50 router. And it doesn't matter that their income from this side business is inconsequential. The rules say that they need an 'aye'.

My own theory with "Right Way Guys" is that some people have been able to find a lot of success by leveraging the knowledge that's stored in the hivemind of society. They don't really know what they're doing when you consider what's going on inside their skull. But they have successfully copied success up till now. The plus side is that they're able to inherit successful methodologies that have survived over time without having to do all the hard work themselves. The down side is that they literally don't understand when they're in a scenario where it will lead to failure. [2]

[1] - I didn't think much of this until I watched the director commentary, where they explicitly talk about how they intentionally set up this interaction to showcase how certain characters took the rules too seriously and which ones didn't take them seriously enough.

[2] - My theory on my theory is that this is also why so many people encourage others to join in their favored cargo cult practice (be it software development or anything else). Social behaviors may or may not work, but the more people you can convince to follow them the greater the chance that when they fail catastrophically and/or fatally they're failing for someone who isn't you.


I over-analyze everything and if I actually explained all the reasons the screwdriver should be put away, you would be bored, possibly to death. If you persist in arguing you might hear half of it.

And yet my toolbox is still full of things I do because someone I respect did them, or someone I disrespect didn’t, and the outcomes were dramatic. I know the expected outcomes, I have half an idea how they actually work, I just know it’s a magic spell that gets me into or out of problems. We don’t know why asking dad for ice cream works more often than mom, but empirically it does.

There are areas dictated by human factors like errors, or overconfidence. There are areas I will sweat blood to do a mindful task to avoid a mindless one, and vice versa. And there are things I do because it improves outcomes with neurodivergent people, and sometimes neurotypical ones.

There’s a concept in chronic illness circles called spoons. It is a metaphor that acknowledges that it’s not time that’s the constrained resource, it’s energy. A wake up call that software desperately needs. When you have used all your physical or emotional energy you are “out of spoons” for the day. You can’t do anything but veg.

This is slowly being replaced in psychology circles with metaphors that are more like deck building card games, because they speak also to many other groups of non-extroverts. If you aren’t prepared for a task like calling tech support or dealing with a toxic relative, being forced to do it now burns all the other tasks you might have done cheaply today. These people want to know they have a dentist appointment or a date in two days because they need to psych themselves up. Spend energy today and tomorrow making sure they have that card available. And if there’s a cancellation, they are out all that energy and have to spend it again.

There’s a lot of this in our work too. The old joke aphorism about, “why spend a day doing something you can spend a week automating?” falls under this umbrella.


I call this effect “sand in the gears”. Some things are low friction and effortless, others less so. Too much friction and the “machine” seizes.

Psychological aspects are a big part of this.

My pet peeve is overzealous security trolls making IT staff use four layers of VPNs and remote access solutions with multiple glacially slow MFA authentication prompts.

I watched one guy typing into a console with a two second lag on each key press. That’s the round trip time to the Moon and back!

I felt bad every time I had to ask him to do something for me, because his shoulders would slump and he’d have this depressed look on his face as he forced himself to jump through the hoops… again.


Had a conversation with a friend yesterday about a case [1] where students at Oxford spent 500 years swearing irreconciliation with a guy from the 1200s as a part of their graduation vows. When it was reviewed in the 1600s, someone suggested removing the clause, but it was turned down. Rules and traditions have reasons, but it is interesting to think about how many we follow all the time without really understanding why.

Cultural DNA (traditions, rules, and the 'right way') encode a lot of useful experience - but environment and circumstances change. And sometimes it just picks up weird things along the way.

[1] https://news.ycombinator.com/item?id=38710661


There was a moment when the cultural equivalent to genes was called memes



You sound like you saw the phrase "Right Way Guy", felt it sounded pejorative (it's more gently poking fun at a common thing), and decided it must mean That Type Of Guy I Don't Like.

The article is referring to the foibles common to someone who's just had their first experience with not being a cargo cultist, i.e., getting overexcited about their first taste of real understanding. Mostly these foibles involve having somewhat cringey one-sided conversations.

I have no idea how the Primer anecdote relates to anything at all.


| You sound like you saw the phrase "Right Way Guy", felt it sounded pejorative

I'm not sure what it sounded like, but I'm pretty sure I didn't feel that way.

| decided it must mean That Type Of Guy I Don't Like.

... huh. That escalated kind of quickly.

| I have no idea how the Primer anecdote relates to anything at all.

Well, I had hoped the paragraph following the anecdote explained that, but I guess I can't win them all can I.

EDIT:

What I find super interesting is that my original comment was that there are some interesting success and failure modes associated with "Right Way Guy".

And I follow it up with a musing that social pressure to conform is a way for the social hivemind to protect adherents via statistics.

THEN one of the comments looks suspiciously like it's chiding my social disharmonious actions.

Very cool.


From your original comment, I learned some new things about a movie I enjoyed but haven't thought about in a while. I felt it connected fine to the topic, and made for a really interesting contribution to the discussion.

I've always been intrigued by how the representation of those characters instantly connected so well with my engineer-identifying brain, and little details like that must have been a big part of it. It's a neat little nuance on the idea that what bits of cargo cult a person exhibits _now_ can sometimes give insight to their current place in life, and perhaps even to their deeper character and heart.


I'm sorry I replied; I thought you were confusedly misinterpreting the article by reading something about cargo-cult practices into it, but I now think you're just sort of emitting nonsense, and so it turned out to have been pointless.

edit: Actually, on reflection,

>THEN one of the comments looks suspiciously like it's chiding my social disharmonious actions.

I consider this extremely aggressive, rude, and to bear no relation whatsoever to anything I typed. I don't understand what causes people to talk in bad faith like that.


> I don't understand what causes people to talk in bad faith like that.

Okay I guess I'll take the bait and assume you are legitimately confused and that your original comment was in good faith.

When you say something like

> You sound like you saw the phrase "Right Way Guy", felt it sounded pejorative (it's more gently poking fun at a common thing), and decided it must mean That Type Of Guy I Don't Like.

you might think that no one should get offended by this because you are being objective about your thoughts. But it doesn't matter that you're "just" sharing your thoughts. These thoughts are hurtful to the parent as evidenced by their reactions. They're hurtful because people don't like being wrong nor do they like being told that they're upset. If this isn't clear to you, then you have my deepest condolences because I expect you often unintentionally offend others.

If I may offer unsolicited advice, you can avoid these kinds of unpleasant reactions by mincing your words. I know a few people who speak like you did, and they are all so confused when I tell them this, because they think readers would be offended by the unnecessary formality and fluff padding their message. But without it, their message often comes across as curt.

I see this interaction play out all the time on HN (including my response), so I hope my response is not in vain...


No, I meant that part to be a little mean, and I wouldn't be upset if the other poster had taken offense at that; it's a tradeoff.

I'm offended that they emitted actual word salad at me followed by a random (and actually wildly offensive! or it would be if it made any sense) accusation that was unrelated to anything even close to anything I said.

That's what I mean by "bad faith".


I really respect the effort you've made here to communicate with some empathy and to try to be a help.


Ha, reading the first part I got the incorrect feeling that you were actually proud of that approach; I thought 'geez, not this again'. Luckily I read till the end :) I see it all the time, well, most of the time these days. I call those folks smart juniors, and it literally doesn't matter if they have 1 or 20 years of experience.

Smart juniors know most if not all design patterns, have read The Mythical Man-Month, and so on, and... are super eager to try new technologies, languages, and frameworks, literally running around with shiny new hammers desperately looking for anything resembling nails. And then they make some stupid decisions because they don't grok the power an optimized, boring old DB can bring.

Then comes a guy like me with 20 years of experience, who has seen damn well how such projects end up 5-10 years down the line with half if not most of the original team gone, and who cuts half of that crap out while keeping the genuine added value. We are not running a kindergarten for devs who want to have fun at all costs all the time; we work for a business that pays us and expects good, stable, and relatively quick deliveries, and the rest are details on our side. You can't be taken seriously by the business if you behave like that, no matter how good your intentions are.

In my experience, bleeding-edge stuff and trying new things just to avoid boredom will consistently break more down the road than it will help fix. We incorporate technologies the whole team can grok and be proficient in, not just some bored superstar. Build your CV elsewhere if you desperately want as many technology entries as possible; there are companies like that, and good for them. IMHO these folks don't stay around for too long anyway, so the full added value is questionable; the grass is always greener elsewhere, at least till they get there. Could be boring for some; I call it seasoned (there's still an endless amount of stuff to learn, but that's fine, and that won't change till my retirement).


Building things the right way is a chance for people to learn tools and have fun. They're more impressive if your goal with a side project is for companies to look at your GitHub page.

Documentation is crucial. I have some fairly basic <300 LOC projects, including a bot and a web scraper, that I currently cannot run because I didn't document them. The Raspberry Pi they were running on died, and I need to do... something... to fix my headless virtual framebuffer and debug some cryptic errors that don't turn up productive search results.

If I had containerized or, at a minimum, documented my setup steps, I would not have this problem.
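
For what it's worth, even a rough containerization pass is only a few lines. A minimal sketch, assuming a Python scraper that needs a virtual framebuffer (the file names and packages are guesses for illustration, not my actual setup):

    # Dockerfile (hypothetical sketch)
    FROM debian:bookworm-slim
    RUN apt-get update && apt-get install -y --no-install-recommends \
            python3 python3-requests xvfb chromium \
        && rm -rf /var/lib/apt/lists/*
    WORKDIR /app
    COPY scraper.py .
    # xvfb-run provides the headless virtual framebuffer the scraper expects
    CMD ["xvfb-run", "python3", "scraper.py"]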


What's tricky is that in a different context, those people would be excellent contributors. Like in a mature product, who wouldn't love a developer who works on documentation, builds out a staging environment, and makes the codebase more scalable? But it just doesn't make sense when you haven't shipped anything yet.


I disagree; there is almost never a "right way", and pragmatism and a reality-based outlook always trump "right way"-ism. This is especially true for mature products, which almost always have some history behind them: shifting requirements, "seemed like a good idea at the time, but with the experience we have now it probably wasn't", shifting trends in the industry, constraints in terms of dev time or budget, etc. Often this is sub-optimal, but it works well enough, so it's fine.

And never mind that opinions on "the right way" differ. Previous poster mentioned ORM: some people think "the right way" is to never use an ORM, some think "the right way" is to always use it.

Right Way Guys will insist that your codebase always needs to be scalable, whether it makes sense or not. You've got a B2B product that will only have a few hundred customers? Doesn't matter. It needs to be scalable. It's the Right Way.

Right Way Guys will insist that this kind of ugly module, which hardly sees any changes and is basically bug-free, needs to be rewritten The Right Way once they add a minor, trivial feature. It doesn't matter that it works fine. It's The Right Way.

Right Way Guys make things worse. Always.

In the case of juniors: they can be taught. They're just juniors. That's okay.

In the case of seniors: good luck... I'd argue these are among the worst people you can hire.

And you really don't need to be a Right Way Guy to write a few docs or set up a staging environment.


A balancing act between premature optimisation and technical debt.


I'm actually not so convinced there's a lot of balancing in most cases.

Most of the time, just doing "the simplest thing that will work" is actually quite future-proof, because when (if!) it needs changing then this is usually not too hard, because it's simple. It's usually not too tricky to make something simple more complex, and the extra costs over "make it complex from the start" should be quite low.

What I do see is people just writing bad code. "zomg this function is 5,000 lines long and nested 9 levels deep and I can't make heads or tails of it, but somehow it magically works, kind of, with bugs, but no one really dares to touch or refactor it because the last two times we introduced regressions and had to scramble a fix, and there are no tests, and adding tests is hard and requires refactoring, which we don't dare do". That type of stuff. Not a hypothetical exaggeration either, I'm afraid :-(

But bad code is just ... bad code. People call this "tech debt" but it's not – it's just bad code. Probably took more and not less time to get that crap to work in the first place compared to if you had done it right.

I think one of the major mistakes the Right Way Guys make is to "solve" this by adding patterns and architectures and whatnot. But the solution is to just not have bad code like this.

I've seen all of the above play out more than once, with different companies with wildly different tech stacks.


Absolutely and this is basically the entirety of the debate around best practices boiled down to a single word - context. There is no meaningful way to make decisions about software without the higher level context.

To be fair to them, maybe they just care more about learning new stuff than about shipping, so in their context all of the stuff they do makes sense. For me, I know how to write good code, so I am no longer interested in doing it except when it's necessary :) First you must learn how to write great code, then you learn when to write simple code instead.


What's even more tricky is that almost all (successful) projects start out small and it takes immense discipline and foresight to catch up on "maturity" at the right stage in the project.

Chances are, you'll get bogged down with these "maturity" things from the start and never build anything successful, or you'll go fast (perhaps catching a glimpse of success) until nobody can keep working on the project once those "maturity" things start to matter.


I had to realize that for people like this the "Right Way" is the hobby, not the Discord bot.


This is exactly it. It's fun to try out the new tools and how "effortless" they make it. It's a kind of tinkering.


I initially was expecting you to make excuses like "yeah, our code has undefined behavior, but it works, and only 10 people will use it". This is how I've seen the "but it works, just get shit done" attitude employed among my programming peers. People use "but it works" to excuse things that don't actually work.

In the end, I think you're right though. As long as the code is reliable, then the shortest code is the best code (within reason, don't start playing code golf). It takes an awfully good abstraction to beat simply having less code.


I kind of assume the opposite problem. If you start with the code and then add testing and documentation, you never have to commit to leaving broken things in place just to maintain otherwise inconsistent alternative sources of truth. If you try to maintain these things from the start, you probably end up with a lot of errors that no one is authorized or competent to fix without leaving behind references to incorrect truths, even if it is only in a less frequent contributor's mind.


When you're a junior engineer, overdoing it like this on a small project can be a good way to practice new skills and see how they apply through a project's whole lifecycle.


Remember, juniors have it tough in the employment market, so some of this could be due to RDD (resume-driven development): it lets them tack SQL, an ORM, and managed VM infrastructure onto their resumes.


> I was myself a "Right Way Guy" at the beginning of my programming for a few years

Maybe it's a case of knowing the rules so you can break them.


Are you sure that all this infrastructure is not, to them, the actual point of the project? Sometimes it’s fun to go way overboard on engineering for its own sake. I saw one guy who’d built an entire PLC-based control cabinet to automate a cat door. Necessary? No but that’s beside the point. :D


The reason they're doing that is that they're going to be tested on that knowledge when looking for employment. It's harder than ever to be full stack, but there's plenty of competition.


Very good article and I hope more people read it. Over my career of 15 years in software (data management), I have learned exactly that. One other thing that really helped me was something that I learned in my Theravada Buddhist upbringing: https://en.wikipedia.org/wiki/Kesamutti_Sutta

> The Kesamutti Sutta states (Pali expression in parentheses):[5] Do not go upon what has been acquired by repeated hearing (anussava), nor upon tradition (paramparā), nor upon rumor (itikirā), nor upon what is in a scripture (piṭaka-sampadāna) nor upon surmise (takka-hetu), nor upon an axiom (naya-hetu), nor upon specious reasoning (ākāra-parivitakka), nor upon a bias towards a notion that has been pondered over (diṭṭhi-nijjhān-akkh-antiyā), nor upon another's seeming ability (bhabba-rūpatāya), nor upon the consideration, The monk is our teacher (samaṇo no garū) Kalamas, when you yourselves know: "These things are good; these things are not blamable; these things are praised by the wise; undertaken and observed, these things lead to benefit and happiness," enter on and abide in them.'

That advice is something that I found very useful in life even though I have become an atheist since I was in 8th-9th grade.


Even though what you've shared here resonates with me, after googling it I seem to have discovered that this particular translation is contested. The claim seems to be that someone distorted their translation to be more rationalistic.[0]

Which is a shame if true, because I sorely wanted to believe that the Buddha was this, um, enlightened.

[0] https://fakebuddhaquotes.com/do-not-believe-in-anything-simp...


I have a hard time understanding what you mean by that, could you put it in simpler terms?


Alternative rephrasing from a source without a citation:

"Do not believe in anything simply because you have heard it. Do not believe in anything simply because it is spoken and rumored by many. Do not believe in anything simply because it is found written in your religious books. Do not believe in anything merely on the authority of your teachers and elders. Do not believe in traditions because they have been handed down for many generations.

But after observation and analysis, when you find that anything agrees with reason and is conducive to the good and benefit of one and all, then accept it and live up to it."


I have not seen the like nor studied the field, but I interpret it as internal over external knowing: personal experience over dogma of any stripe.


I thought that might be a possible interpretation, but then personal experience is just an anecdote and also not a thing when you're just starting out


Most people underestimate how valuable personal anecdotes are compared to sampling/survey data. Has it ever happened to you that a restaurant or other experience has thousands of rave reviews online (or even among your friends) but when you go there it's shockingly underwhelming? That happens a lot. Your personal anecdote is almost always more valuable in such cases because of several reasons including recency, authenticity, self-honesty and the usual problems with review aggregation platforms.


Food is a very subjective thing, online reviews are often intentionally or unintentionally distorted, and the quality and staff at those places change over time too. Considering all these factors, this specific scenario doesn't make for a great general example.


It is Buddha telling the traders that they shouldn't trust whatever is told to them even if it comes from the so-called experts/gurus, family/friends, teachers, traditions etc. He told them that they should try to reason out the things being told to them, give these a try in their lives and only keep them if these suggestions/advice works for them. A more simplified version can be found here too: https://www.ling.upenn.edu/~beatrice/personal/buddhist-pract...

Hope it helps. :)


> But also don't worry too hard about getting "tricked" or learning "the wrong thing".

Did I learn TDD when it was hip? Yes. Do I use TDD? No. But did TDD teach me how to write better code? Yes.

Same with languages I've learnt but not ended up using professionally. Everything teaches you something. You can't find a good middle ground without stepping a bit too far in both directions.


I think we all do a bit of TDD, just not the extreme kind. If you already have the code and now you're trying to extend it, thinking about how this is going to be done and writing a few tests around the usage makes sense. It also makes sense if you don't have the code but you do have a way to structure the code in your mind and you can figure out what parts you can build and test first.

So while it's not TDD in the pure sense (it's more like Test Assisted Development) it is leveraging the spirit of TDD.
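
As a concrete, made-up illustration: if I'm about to add, say, a small config parser, I'll write a couple of tests that pin down how I want to call it, then fill in the implementation. A minimal Rust sketch, with the function and behavior invented for illustration:

    // A made-up example: pin down the intended usage with a couple of tests,
    // then fill in the implementation.

    /// Parse a "key=value" pair, trimming surrounding whitespace.
    fn parse_pair(input: &str) -> Option<(String, String)> {
        let (k, v) = input.split_once('=')?;
        Some((k.trim().to_string(), v.trim().to_string()))
    }

    #[cfg(test)]
    mod tests {
        use super::*;

        // Written first, to describe how the function should be used.
        #[test]
        fn parses_a_simple_pair() {
            assert_eq!(
                parse_pair(" name = bot "),
                Some(("name".to_string(), "bot".to_string()))
            );
        }

        #[test]
        fn rejects_input_without_equals() {
            assert_eq!(parse_pair("no separator"), None);
        }
    }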


TAD has just about the right sound, too. I've found it's a useful tool for very specific tasks. I found TDD perfect for writing functions or modules with a wide variety of possible state-dependent outputs. While not strictly TDD, I've also found it helpful to sit down and frame a list of desired behaviors for a piece of code as expectations first, then go through the process of writing the code and tests in tandem. It's nice when you know exactly what you need to do, but not how to do it. It's much less helpful when you know the how but not exactly what.


Yeah. To me it's about not being a zealot about it. If I can get 80% of the value with 20% of the effort by tweaking it, I think that makes sense.


"Just a TAD is all you need."

https://www.merriam-webster.com/dictionary/tad


Could have been worse. Could have gotten: https://www.urbandictionary.com/define.php?term=Tad

Hmm... Wait. Is that worse?


In a sense, even the way I use print statements and other outputs to validate the behavior of a non-TDD program at each iteration could be called test assisted or something. It's just in that case, the test is the whole program at each iteration, rather than something that is fully automated and decoupled from its normal operation. Each test then expands in scope with the scope of the program, not being preserved except perhaps as assertions or log messages.


Sort of. The way I'm thinking about it is that you do write the tests, and the final output would resemble what you would get with TDD; it's just that you put a little more thought into the structure of your software before starting to write the tests.


I get what you mean. I was stretching the idea a lot. What I described might be more aptly called "iterative development", and is really basic. There is an even further extreme than that from TDD, though. It's of course possible to write a program without even ephemeral, manual tests, just intuiting how it will work as you build it instead of checking and iterating... but anyone with experience knows better than to try that beyond trivial programs.


Yes. I have a name for your method also: debugger-free development (DFD!). While controversial with some, my theory is that if you need to attach a debugger to understand what is going on with a live program, you've already lost. Logging (i.e. printf++) and metrics should allow you to understand what happens at all times. You may need to adjust the log level, but that's about it.
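
In Rust terms, a minimal sketch of what I mean, assuming the log and env_logger crates (my choice for illustration, nothing special about them):

    // The log level is set from outside via RUST_LOG, so you can turn
    // verbosity up without touching the code or attaching a debugger.
    use log::{debug, info};

    fn handle_request(user: &str) {
        // Cheap, always-on breadcrumb.
        info!("handling request for user={}", user);

        // Detail you only want while chasing a problem; enable with RUST_LOG=debug.
        debug!("cache miss for {}, querying DB", user);
    }

    fn main() {
        env_logger::init();
        handle_request("alice");
    }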


Couldn't agree more. Learning bad things - not that TDD is bad, or anything, but over committing to it might be - is a huge part of understanding why certain approaches are better or worse than others. Although, I suppose 'making mistakes is the path to learning' is already conventional wisdom, so.


> Do I use TDD?

Sometimes. If the project/feature I'm working on lends itself to TDD, I use it. Generally speaking, all these guidelines and principles are good to know. They become a problem when people take them religiously. Ideologues are really a plague in our profession.


> You can't find a good middle ground without stepping a bit too far in both directions.

This is beautiful. So short, so apt, so instantly relatable and palpable.


Great article and advice!!

> Eventually the honeymoon will end and you'll learn that programming is frustrating and messy regardless of which Right Way people use, and that you can also make great software without doing it the Right Way. Over time you'll learn fifty other Right Ways and learn to mix and match them to the problem at hand.

My impression from the last few years, though, is that nowadays many developers get stuck being Right Way Guys and are unable to broaden their knowledge and views. This is sad, and I do not know the cause of it, if it is even true. My speculation is that it has something to do with attention spans that are now too short to efficiently expand knowledge, combined with being too comfortable in their current positions. Or maybe it has something to do with too many incentives to only learn specific frameworks and not the basic knowledge of how things work "under the hood".


My guess is that professional developers are too concerned with getting into their next job a year from now. So they both don't have time to learn the lessons from this one (because they'll be out by the time those lessons hit) and can't afford not to focus on the new great thing.

That really impedes growth.


My 1¢ of advice: State is the enemy. Minimize state wherever you can.

This includes state as in your code, state as in how many things you need to hold in short term memory to do your job, state as in how many project specific details you need to remember, all of it. State is the enemy. If you can derive it from first principles, always try to do so.
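
A tiny, made-up illustration of the code side of this: instead of caching a value that every code path has to remember to keep in sync, derive it on demand.

    // Extra state: every add/remove has to remember to update `total`,
    // and it can silently drift out of sync with `items`.
    #[allow(dead_code)]
    struct CartWithCachedTotal {
        items: Vec<f64>,
        total: f64,
    }

    // Less state: the total is derived on demand, so there is nothing to
    // keep in sync and nothing extra to hold in your head.
    struct Cart {
        items: Vec<f64>,
    }

    impl Cart {
        fn total(&self) -> f64 {
            self.items.iter().sum()
        }
    }

    fn main() {
        let cart = Cart { items: vec![9.99, 4.50] };
        println!("total = {:.2}", cart.total());
    }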


I cannot stress enough how much you can reduce "how many things you need to hold in short term memory to do your job" by just keeping a notebook (or file, or wiki, or whatever works for you) and writing things down.


It is amazing to me how many people do not write things down. It's like everybody wants to turn every job into a contest to see who can spin the most plates in their head.


Your engineering notebook is a gift to future you.


I've been writing everything down in a notes app recently and I love it. If I think of a good Christmas gift for my brother in July, it goes in the app. If someone recommends an interesting-sounding book, it goes in the app. If an app at work has a specific or convoluted build process, it goes in the app.

As someone who spent his entire childhood getting in trouble for forgetting things, it's been life-changing. Computers remember things so much better than my dumb brain.


What app do you use?


So, don’t derive it from first principles each time?


Just the first time


Therein lies the wisdom


Some of these go beyond just advice for programmers. Everyone should reflect on the first point: most authors and essayists are read because they are good writers, not necessarily because they are experts on the subject matter.

Take walks!

Try different types of work. Learn about other job functions at your company. In big companies in particular, you may be doing things that just have to be corrected or worked around somewhere else, when you could make things easier for everyone.


The first item is also why it's good to be skeptical about things you learn on video sharing services like YouTube too. If you sound credible and present your ideas in a professional way, people will take them seriously regardless of how flawed they might be. In a world where a fair few 'essayists' have been exposed for just reading Wikipedia articles and Reddit posts, it's definitely good to try and not associate how people present their ideas with how accurate said ideas are.

I'd also expand on point 10 to add that talking to these people often gives you a better picture of what problems your software/dev work is trying to solve, and can help you identify pain points that you may not have noticed yourself. Remember that with (very few) exceptions, software isn't written for the sake of it; it's written to solve problems.

Either way, nice list! Definitely has some useful advice there.


This comment just made me consider LLMs as similar to the essayists you mentioned - they read the "right" answer somewhere on the internet/during training and then can regurgitate it in a way that makes sense without consideration of how true it is in practice.


Not even the right answer much of the time, just information that sorta matches the right patterns as requested by the user. But LLMs say things in such an authoritative, intelligent-seeming way that it's indeed easy to forget they have no idea what they're actually saying or how accurate it is at all...


Humans were hallucinating responses given a prompt long before LLMs were a thing. Just look at any comment thread.


Certainly! It's generally considered a negative trait for LLMs to hallucinate but to say a writer is doing the same thing is kind of just implying they are a bad writer with a surface level understanding of their topic. I believe it's the confident delivery that's the issue, or the improper logical synthesis of ideas because they "seem" to be related.


I'm such a fan of the debugging book[1] recommended in the article that I bought a bunch of copies to give out. I've given copies to friends, interns, data scientists and even executives, and everybody who's read it has appreciated it. The book has a small number of simple techniques that are great for solving problems in any large technical systems, so it's useful for almost everybody. It's also an easy and engaging read, with lots of "war stories" providing color and context for the otherwise-dry suggestions.

[1]: https://debuggingrules.com/


that's great, do you still have any to give out? i don't usually read books but i am trying to take it up lol. this may be a good way to start


Did...did you just ask to get a free copy of the book?


> People don't listen to me because I'm a good programmer, they listen to me because I'm a good writer.

I think this point extends beyond the idea of guarding ourselves while reading the contents of good writers; it's also about how we should approach our jobs. Being a good writer will most likely improve your skills in dealing with other people. As software developers, writing and communicating with others is crucial to our work.

I would also recommend that beginners write about the challenges they have encountered, their experiments, and their thought processes, among other things. If possible, they should write essays. This will prove to be a really useful skill later in their careers.


The one I have a hard time explaining well enough for new folks to get is: "It is not your job to write code."

We use code as a tool to solve problems. Code is the mechanism of achieving our goals, not the goal in and of itself. If we are coding just for code's sake, we'll deliver the wrong results. We need to be focused on solving problems, and if we aren't sure what problem our code is solving, we need to stop coding and figure that out.


An interesting corollary to this that first starts hitting hard at the early mid career level and never quits:

"It is not your job to write slack messages". (Or emails, depending on the company.)

It can really feel like it is! Very much like the code one, it often feels like, well, this is the work; answering questions, collaborating, influencing the direction of the team and organization.

And communication is indeed important. But again, much like code, it's a tool that is used to create useful things, it's not itself the useful thing.


> We need to be focused on solving problems, and if we aren't sure what problem our code is solving, we need to stop coding and figure that out.

I am not sure whether this is good advice, for a quite subtle reason: I claim that most code (including most of the code that is written "for code's sake") does solve a problem, and most programmers are at least unconsciously aware of this fact, and that's why they write this "code for code's sake".

The issue is rather that other people (say, "the suits") want the code to solve a different problem than the one it actually solves.


Assuming the devs don't live in a vacuum, it is other people who define the problems to be solved. Doing so consciously, and not unconsciously by chance, is what engineering is about. Producing something just to produce something, while ignoring the problems you were handed to solve (it doesn't matter if it is clients, customers, regulators, or management handing them to you), is pointless.

The tricky bit is figuring out what the real problem to be solved is, I agree. Ignoring that question and doing whatever comes to one's mind first doesn't really work, though.


> Producing something to produce something

A central argument of mine is that this "produce something to produce something" situation rarely happens; in my opinion this is rather evil propaganda from people who hate programmers and their way of thinking. Such code nearly always solves a problem, often one that the managers don't understand.

Just to be clear: it does happen that the code solves a problem in a bad way - this is where in my opinion the trope of "code for code's sake" comes from.


> this is where in my opinion the trope of "code for code's sake" comes from.

Some companies still use metrics like LoC to assess employee productivity. In such cases it might become a necessity to write code that's just there in order to look good on the next quarterly evaluation sheet.


It often IS your job to write code though, so saying it's not is false. The code has to be the right code however.

More generally, "you are solving business problems by applying software systems"

That's how I describe this to junior folks. The goal is to make them focus on the business need, then they can sub critical thinking for MBAs/managers/PMs and do the right thing.


I don't mind the way codingdave put it, though. Sometimes it isn't your job to write code. In fact sometimes it's the opposite, and your job is to talk to people and explain to them why code doesn't need to be written.

Saying, "It's not your job to write code," might seem technically false, but in terms of getting the message across, I think it's striking and it works well. And getting the message across is the goal of communication, moreso than being 100% accurate.


Agreed. Just like it's not a carpenter's job to hammer nails. It can be the job sometimes, but other times it's not needed to build something.

Just like solving the problem might be to remove code. Or to inform the user of tools that already exist but haven't been documented enough for the user to know about them.


Decent analogy.

A carpenter solves problems with wood. Sure, they communicate with customers, measure and draw, sometimes say "it can't be safely done"... But if you remove the part where they make a product out of wood, I don't think they are a carpenter. They are... a home solution specialist, or something?

If my job was directing others to open source tools and removing dead code, I'm not a developer.


Agreed. The term software engineering/development implies much more than writing code.


That is the difference between real engineering (solving technical problems) and "engineering". Thanks for putting it better than I ever managed!


If this were true then solutions which required no code or a line of bash would be greeted with open arms.

Having software to sell to VCs is valuable. So yes, sometimes you're paid to write code.


Most advice about programming is about writing code, but we rarely ever consciously consider how to improve at reading code. One way to do that is to read more code.

If you read really good code, and I mean really read it, like a book, and absorb it, then you will improve so much.

This kind of improvement is the most impactful, because you spend more of your time as a programmer reading code than writing new code.

Also, if you become good at reading code, then you don't need documentation.


Personally, I find it to be even better to step through code using a debugger.

I think just reading down the text of a set of code files like a book has pretty limited utility. (Though better than not reading code at all!) It's like reading one of those choose-your-own-adventure books from start to end.

Starting from an entry point and then digging into methods from there is better, but being able to inspect the runtime state and get a sense for the data layout makes the experience so much richer.


> but being able to inspect the runtime state and get a sense for the data layout makes the experience so much richer.

That's kind of the problem and why it's a crutch. If you get better at reading code and reasoning about it without needing to use a debugger, it'll make you even more productive for when you actually do need to reach for it. The opposite is not true.

Being able to read code deeply in the setting of code review and point out structural issues without needing to execute that code or strap on a debugger is an enormous superpower, and one of the biggest skills that differentiates an early-career or mid-level engineer from a senior engineer in those that I've hired and managed. Over time, you do a lot more reading code than writing it if your codebase does its job successfully.


I don't think it's possible to put too fine a point on this because it seems to be a weirdly and stupefyingly common point of view among programmers:

This perspective makes absolutely no sense.

This is like saying that the exercises in mathematical textbooks are a crutch because you'll never be able to read formulas if you interact with the ones in the book via the exercises given. This is the exact opposite of the case! The exercises are there to force you to "step through" the technical details in the text, to actively build intuition for the dynamics. If there were a way to spin up a math debugger to literally step through the details as you work through the exercises, that would be amazing. Educators would kill for that capability! And we already have it, essentially for free. But then out of some kind of misplaced sense of purity, tons of people have concluded that it's bad to use this incredible capability.

This is just bad pedagogy.

Edit to add: I feel like I didn't state this plainly enough: The way to learn things from textbooks is not to just read a bunch of different text, it is to interact with the concepts covered by texts. That's why textbooks all have exercises, and why educational institutions always ask students to do those exercises (and more). The advice to "read a lot of code" is like saying "read a lot of textbooks". But that's not good enough. You need to run the code, ask questions of it, test assumptions out; this is just like doing exercises in a textbook, and a debugger is a superpower for doing that active interrogation, while reading the text.


I think it's common to react to an alternate approach with defensiveness and assume it doesn't make sense, because it calls into question one's sense of identity and strategic competency.

I never said to "read a lot of code" analogously to "read a lot of textbooks." I actually explicitly called out the practical context of code review, which is where you really learn and develop the muscle of code reading in a design critique setting.

You cannot get good at code review without getting very good at reading code and structurally reasoning about it. You get good at code review by doing it, and in particular doing it with other engineers who are very good at it and who can teach you how to do it. There are a lot of ways to do this that are essentially free (open source contribution is a great one); and of course, it's a mandatory part of what you'll need to do on the job as a professional engineer at any reputable shop.

Get good enough at giving and receiving quality code reviews and you'll find yourself reaching for your debugger less and less. That's because you'll be able to reason about code /in your head/. You can see how common and edge paths would evaluate without needing to execute it, because you've built the muscle to technically analyze and discuss it as it stands.

Go without building that muscle, and you'll lack the foundations that it builds for you, and you'll always rely on the crutch to make up for it. For what it's worth, this is what well designed software engineering interviews are designed to test for -- your ability to reason about and effectively work with code without being dependent on tools.

If you've never seen an experienced senior or principal engineer who uses zero tools solve problems 10x faster than you can because of their ability to do what I just mentioned, this is going to sound foreign. But if you ever develop the inclination to develop into such an engineer, you'll have to do what I just described.


You and I agree on this being an important "muscle" to build.

But this thread is not about people using this muscle, it's about people developing it afresh and further exercising it. What you described with senior engineers doing code reviews and such is analogous to a ballet dancer on stage performing, but I'm talking about doing a thousand pliés in a practice studio. The stage doesn't have a barre or a mirror, but the studio does, and it would be foolish not to use those useful tools when exercising.

I'm not totally sure if we're just talking about two different things - use vs. exercise - but a plain reading of your comments makes me think that you think the best way to improve one's structural reasoning (or "mental debugger" as another commenter put it nicely) skills is to just read more code. But I'm sorry, that's just not right. It is much better for building a good mental model of how code executes to simultaneously inspect and interrogate it as you're reading. It isn't necessary, it's just better.

A crutch is just not the right analogy. Rather, it's an aid that makes exercise more effective so that you're stronger when it comes time for game day.

Or, to come at this from another angle: Surely you agree that just reading code is not actually helpful for developing the reasoning ability you're talking about in your comment. Wouldn't you agree that novices need to run the code they read, to see what it does? Otherwise they are guessing, and as novices, they are almost certainly guessing wrong. Without anything to check their guesses against, they are likely to just assume they're right and internalize that likely-wrong mental model. I certainly had to painstakingly undo a lot of my incorrect mental model from bad guesses early on. The more that can be avoided, the better!

If you agree with that, which I suspect you do, then I'd put it to you that a debugger is just an efficient and effective tool to better aid in this process of guessing what code does and then checking those guesses, in order to further develop and internalize a correct mental model of how it works. It's like if you could run a unit test surgically exercising exactly the code path you're interested in, exactly up to your breakpoint, and assert on exactly the program state you're guessing about to confirm or deny your hypothesis. But you can do it interactively, without needing to write test code. It's an amazing capability and learning aid! It's very bad advice to tell people not to use it as they are learning.


> I'm not totally sure if we're just talking about two different things - use vs. exercise - but a plain reading of your comments makes me think that you think the best way to improve one's structural reasoning (or "mental debugger" as another commenter put it nicely) skills is to just read more code.

I've already said twice, both in the original comment and in the follow up, that my point is about applied code reading in the context of code review, not in a vacuum. The first time you decided to respond to a made up point which I never made, I clarified it. Now that you're doing it a second time, it's clear you're arguing in bad faith and putting words in my mouth.

I wonder why you keep on arguing against a strawman point that I never made rather than actually responding to my argument. Is it because you are incapable of understanding the point I'm making? Perhaps. Is it because you're deliberately ignoring it out of cognitive dissonance as you know it would weaken your argument, and you'd have to admit that you're arguing for something which is the wrong strategy? I hope not, but that's my best guess.

The rest of your comment is impractical. If a novice is struggling with understanding and working with a piece of code, do you really think the best way for novices to learn is to "figure it out" with a debugger, or do you think they should ask for help through code review or pair programming with someone else working with them on the project?

You seem to be arguing under the assumption that software engineering is a solo sport. I am arguing that software engineering is a team sport and those who treat it like a solo sport tend to underperform and plateau in their skill development.

I stand by what I said. Debuggers are a crutch. Crutches can be useful but you shouldn't rely on them. Learn how to reason about code in the abstract (which you learn to do by learning fundamentals of computer science and applying them in a team context with others who can) and you won't need it because you can compile the AST of the codebase in your head. Moreover, this skill is MOST critical to develop when you're early in your career because it's when you have the neuroelasticity to learn new ways of reasoning about and problem solving with code. I was forced to do this by early career engineering leader mentors and while painful and humbling, I am very thankful for it because it put in place the practical fundamentals for me to architect large, performant, scalable and revenue generating systems from scratch with very little issues.

If someone thinks that's impossible to learn whether late in their career or early in their career, that speaks more to their shortcomings than to an issue in the technique. For what it's worth, many of the strong early career engineers I've worked with (1-2 years experience max) have also been able to do this, and often times even better than many mid to late career engineers. As you may expect, these strong early career engineers end up running circles around their more experienced counterparts pretty quickly.


> I've already said twice, both in the original comment and in the follow up, that my point is about applied code reading in the context of code review, not in a vacuum.

Ok, but that's not what this thread was about before you replied. The OP literally starts with the words "advice for new software devs". So I assumed that, rather than just being off-topic, you must be using that as an analogy for what we were actually talking about.

The reason I keep coming back to the topic of the thread is that it is the topic of the thread. It honestly just took me awhile to figure out that you actually did just want to grind an axe on a totally different point.

> If a novice is struggling with understanding and working with a piece of code, do you really think the best way for novices to learn is to "figure it out" with a debugger

Um, yes, definitely. Or at least, it is one extremely valuable tool they should learn how to use effectively. Another very valuable genre of tool that has emerged very recently is things like chatgpt. And then there's good old fashioned reading of code, books, documentation, articles, etc.; those are useful, but the interactivity of a debugger or chat assistant is better.

> or do you think they should ask for help through code review or pair programming with someone else working with them on the project

For one thing, "novice" does not only mean "entry level employee", it also means students. But both students and people early in their career should be taking advantage of both high-touch 1:1 support like code reviews and pair programming, when possible, and also muddling through things on their own, guessing wrong and figuring out what the right answer is and why their guess was wrong. It is not possible when learning and deepening a new skill to have an expert available for individualized attention at all times. Individualized attention from an expert human probably is indeed the best way to learn and develop skills, but it doesn't scale.

> If someone thinks that's impossible to learn ...

Who said that? It is obviously possible to learn things while ideologically rejecting readily available tools that can help you learn more efficiently and effectively, it's just extremely foolish.

For instance, a tool that seems to have been readily available to you, which you seem to have used effectively, was the support of an experienced mentor. It would have been very foolish of you to reject that. But by your logic, using an experienced mentor to help you develop a new skill is a crutch that you must reject because you won't be able to run all your code reviews by a mentor later in your career. But of course you can see why that's stupid. But it's the same thing you're saying about debuggers. "Don't use a tool to learn because then you won't be able to function without it later". It's just not how learning works.

Any tool that helps a novice build a more accurate mental model of a subject is a benefit, not a crutch. Holding an ideology in opposition to such tools is foolishness.


> Individualized attention from an expert human probably is indeed the best way to learn and develop skills, but it doesn't scale.

> Any tool that helps a novice build a more accurate mental model of a subject is a benefit, not a crutch. Holding an ideology in opposition to such tools is foolishness.

It looks like you really don't have a substantial argument or credible results to show for why your proposed approach is better, so I'll refute this one: no, any tool that helps a novice build a more accurate model is not necessarily a benefit. It's the same reason that you don't ride a bicycle with training wheels forever, and the same reason singers who can't sing without auto-tune atrophy in their development.

I'm glad you're at least conceding the value of individualized attention from an expert human being the best way to learn and develop skills, but I disagree with you that it doesn't scale. In fact, I'll refute it most strongly -- it's the only thing that /does/ scale.

If you are a novice engineer and you want to progress as fast as possible, start doing real code review as soon as you can. As I mentioned earlier, you don't need to get that from being gainfully employed -- you can and will pick up all of these skills in open source contributions. Start contributing and you'll get feedback. Work on the feedback and you'll start learning what code review is all about in practice. Learn what code review is all about in practice and you'll stop fumbling around in the dark with a debugger.

You only have so many hours in the day. Spend it on what really matters, and spend it on the most important muscles. Do your deadlifts and squats rather than ab crunches.


Hilariously, you and I seem to entirely agree about the point I have by far the strongest conviction on, which is that passively reading code is not the best way to level up skills.

You think people should be taking advantage of human support to learn by actively doing, and I think they should be taking advantage of both human support, if available, and also whatever computer tools are at their disposal independently.

The only reason I started arguing with you is that this prideful machismo ideology of disdain for useful tools that is common among programmers has become a giant pet peeve of mine, over the years since I held that foolish ideology myself. I'm not arguing with you, I'm arguing with myself a decade ago, when I could have avoided a lot of silliness.

> You only have so many hours in the day. Spend it on what really matters, and spend it on the most important muscles.

I also only have so many hours in the day, frankly not nearly enough as it is to do my work and take care of my family, and I don't want to spend any of them trying to figure out how to mentor people who send broken code for review without bothering to first figure out why it doesn't work how they expected and how to fix it. That is not a valuable use of anyone's time; there are plenty of tools available to off-load this to computers, without taking advantage of someone else's "only so many hours in the day".

What I am interested in mentoring people on in a code review or pairing session are the interesting non-mechanical facets of the craft. I love discussing that stuff. But please don't come to me with "I don't understand why this data structure holds this value at this point in the program"; you can figure that out yourself and learn something on your own in the process.


It looks like we are both arguing with ourselves from a decade ago, so at least we share that common ground :)

Candidly, I don't know how constructive this conversation is at this point; given that you've mentioned "I also only have so many hours in the day, frankly not nearly enough as it is to do my work and take care of my family, and I don't want to spend any of them trying to figure out how to mentor people who send broken code for review without bothering to first figure out why it doesn't work how they expected and how to fix it" I find it hard to believe that being one of the best builders in the world is really your goal or that coding is anything more than a vocation for you rather than a calling. And there's nothing wrong with that, but it's going to mean we will simply talk past each other having two completely different conversations.

However, my advice is not for those who are content to be good enough at writing software to maintain a stable job in tech until retirement, but for those who want to go above and beyond that, and stop at nothing to be the best of the best and reap the rewards, internal fulfillment, and sense of purpose that come with that. Realistically that isn't, and shouldn't be, the goal of most early-stage engineers in the field -- it's an expensive goal that, even if attained, will only make a very specific kind of person (like myself) fulfilled. Your advice is likely good for the former category, while mine would be poor.

For those in the latter category, I'll contextualize my advice with this very well-written blog post by Dan Luu about the difference between p95 proficiency and p99 mastery. You can sum up my argument as stating that overreliance on debuggers (and other tools such as IDEs) is absolutely something that you see in p95s that you don't see in p99s. P99s can utilize debuggers, IDEs and other tools, but it doesn't materially affect how quickly they can execute because of how well trained, fast and accurate their "mental math" flavored problem-solving ability already is [1]. If you're curious, please take a look through the linked blog post and see how much you agree with it.

[1] https://danluu.com/p95-skill/


It's true that a debugger is easier to use, but it's limited in that you can't bring it all the places that you can bring your eyes. Also, as one improves at reading code, one develops a mental debugger that can be brought anywhere your eyes can.


> it's limited in that you can't bring it all the places that you can bring your eyes.

Why can't you?

> Also, as one improves at reading code, one develops a mental debugger that can be brought anywhere your eyes can.

This is definitely true, but that "mental debugger" is susceptible to incorrect assumptions about the runtime state at the point of execution. If this weren't the case, far fewer bugs would be written in the first place, as this mismatch about assumed vs. actual runtime state is where most bugs emerge.
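
A trivial, made-up Java example of that mismatch, where the "mental debugger" happily assumes one value while the runtime holds another (the variable names are hypothetical):

  public class StateDemo {
      public static void main(String[] args) {
          int totalCents = 2_000_000_000;
          int bonusCents = 500_000_000;
          // Assumed runtime state: the average is about 1.25 billion.
          // Actual runtime state: the sum overflows int before the division,
          // which is exactly the kind of thing a breakpoint on this line
          // exposes immediately.
          int average = (totalCents + bonusCents) / 2;
          System.out.println(average); // prints a negative number
      }
  }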


> Why can't you?

Honestly not trying to be rude, but that would be because of how eyes work. The moment you do anything that goes beyond looking at code, is the moment you've met that limitation I described above.

The other part you brought up is totally valid, it's just all about tradeoffs. If you prefer using a debugger in situations where I would read the code, then more power to you.

Actually, it's funny, because the original article talked about the One True Way (or whatever his exact wording was), and how we all go through a phase where we discover that for ourselves. I think this thread in a way has been an expression of that concept. Anyways, thanks for your thoughts, I'll remember this next time I'm struggling with the ol' brain debugger ;-)


What do you mean by "because of how eyes work"? When using a debugger, one's eyes mostly remain on the code... I feel like I must just be truly missing what you're trying to say here.

I like your call-out of the "one true way" section. What I was thinking in that section is that my experience, having now traversed a number of "one true ways", is that: 1. Very nearly all tools and techniques are truly useful in some way in some circumstance, and 2. Not all "one true ways" are created equal; some tools and techniques are more useful than others and better stand the test of time.

In my view, using a debugger to understand the runtime state while reading and exercising code (especially when it is unfamiliar) is one of those techniques that is broadly useful and never out of date. But that's not a totalizing view, it isn't the "one true way", obviously nobody only reads code while running it within a debugger, nor should they! But dismissing its utility for this use case of understanding code more deeply is, in my view, just odd.


Where to find really good code? Open source projects that have stood the test of time?


Usually, the best code to read is something that you already need to understand. So, if there's a library you use at work, for example.

Also, for every language there's a really important project written in that language, like SQLite if you want to learn C, or Finagle if you want to learn Scala.


Despite reading essays of other devs telling you how you should dev, have faith in yourself. Over time you'll develop your own proclivities, instincts, etc., and end up writing your own essays to the next generation.

People writing essays are not writing it from some impossible point of knowledge or privilege. They're just you in the future.


Gusto is the attribute of most successful programmers I know. The gusto to always improve oneself, the gusto to follow through problems where the last 10% is actually 90% of the problem, the gusto to push through ambiguity, the gusto to follow things through on the timespan of years.

You don’t need formal training, or “experts” writing blog posts, or the latest fad technology. Gusto and gusto alone can make you successful.


Yes most self taught programmers actually have instinctively a good way of programming if they learn it through hacking.

It's a direct and straightforward way of programming, not ruined by object-oriented thinking or any of the SOLID principles bullcrap.

What does the computer need to do? That is a powerful mindset to get into and building on.


This is true for people who have worked on big projects or had to maintain working software over time.

However, I dare you to look at the kinds of code that "Jupyter notebook only needs to run once"-scientists write... Many of them have spent a lot of time hacking and can hack together something that runs and works on one input incredibly quickly. In a business, however, that's almost never what you want.


I still think it's better than this scientist reading a bunch of books about software design patterns and trying to create something they don't really understand.

Maybe a "bad hacked-together first instinct" is easier to correct and build on than "misusing or mangling an advanced dev paradigm that you're not even good enough to judge whether it's appropriate for this use case."


But also, learn from history. Read books. Luckily, we don’t just have blog-post essays to go on.


> People don't listen to me because I'm a good programmer, they listen to me because I'm a good writer. The same is true of pretty much everybody you'll read.

This is so important and so general. Even writers who make a lot of their real world experience like Nassim Taleb are still read because they are good writers. No one reads Jeff Bezos' letters to his shareholders, because he's not a good writer.


Code reviews are so impactful. Learning how to have others look at your code and provide feedback, and looking at others' code and providing feedback yourself, is huge. Bringing junior devs into the mix, where they feel safe to ask what is going on here or why, is the perfect comment canary. Taking the journey from 'ugly, but acceptable' to polished only comes from feedback and looking at other people's code.


HN hug of death. Here's the Wayback link. https://web.archive.org/web/20231220091455/https://buttondow...


> 8. Take walks.

I'm fortunate enough to have an office overlooking a small pond. When I'm frustrated I go to my window and stare at the pond for a while. When I'm really frustrated I walk down to the pond and stare at it for a while.

I also walk to lunch as often as I can (i.e. when it's not freezing or boiling outside). Highly recommend.


Gotta throw like... two sticks in the pond... or a couple rocks... that will help.


When you’re really REALLY frustrated, do you go into the pond and stare at it from the inside?


I can't say the thought hasn't occurred to me ;)


Walks have helped me a lot in all aspects of life while dealing with problematic situations (as a refugee in Belgium). For the last couple of years, I have devoted my free time to a creative project called awalkaday.art.


> From a person who really shouldn't be giving others advice.

Ok, now I’m listening.

Also, #8 is the best long term advice for programmers. Walk.


Before anything else, learn how to solve problems on your own. I know, we have this huge thing called the internet where everyone else has already solved every problem, but you really need to develop your own problem solving skills and looking up how other people did stuff won't help you do that.


A beautiful piece of code will always be outshined by something that works reliably and well.


When I was younger, I was obsessed with writing clever code to show off my programming knowledge. Over time I realized code that is easy to understand is much more valuable.


Same here. I've had really simple pieces of code in production for a decade with no issue.


> People don't listen to me because I'm a good programmer, they listen to me because I'm a good writer.

That goes for all those talking heads on youtube too btw. They're not necessarily right, but they're good speakers/actors.


Lip service is an important skill when dealing with toxic individuals.

If it's unimportant to the project, agree and move on. E.g. "your code is badly written, but we've a tight deadline so I'm letting it pass the review".

If it was a real issue it wouldn't pass; this is usually just stylistic criticism (unless of course you've ignored good practices like SOLID etc ;) ) and indicative of a new senior who still can't differentiate between functional and genuinely problematic code.


This is all fun and games until that person leaves the team or goes on vacation and some other project's tight deadline unexpectedly depends on understanding and fixing their poorly written code. People really discount the value of maintainable and understandable code until they've been in this position multiple times. Some people never learn it because they "fix" or modify things without understanding them, cause problems, and never reflect on how that happened.


The wink emoji is leaving me with the sensation I am out in the dark on something everyone else knows.


I am not sure what the wink is, I think SOLID is mostly solid but my advice is be wary of anyone who is a zealot follower of Uncle Bob :)


Uncle Bob devotees that I have worked with, have also written some of the most confusing, inscrutable code I have ever seen.

Dependency Inversion, I think, is a poor idea though. It is fine if you must have many different implementations, but I often think one solid concrete implementation is better. Where this really goes wrong is when people are so into Dependency Inversion that there is an IClass for every Class, doubling the number of files, while <5% of these actually have more than one implementation.
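
A hypothetical Java sketch of the shape I mean, where the interface exists only to mirror its single implementation (all names made up):

  // One interface per class, even though only one implementation will ever
  // exist. The extra file and level of indirection buy nothing.
  interface IGreeter {
      String greet(String name);
  }

  class Greeter implements IGreeter {
      @Override
      public String greet(String name) {
          return "Hello, " + name;
      }
  }

  public class Demo {
      public static void main(String[] args) {
          IGreeter greeter = new Greeter(); // the "inverted" dependency
          System.out.println(greeter.greet("world"));
      }
  }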

IMO, going by the original idea of Object Orientation from Alan Kay is the real winner: it's all about message passing.


My two cents as a professional engineer coming from biochem:

While computer science / programming is very young and light on the science side, it has solid foundations in mathematics, and the mathematics that are true today will always be true.

So I argue (without providing any proof) that it is almost certainly a good idea to “follow the maths” in your learning journey. I think this is as true for numerical algorithms as it is for functional programming and type theory.


Yeah, I'm with you very much. There's a very clear distinction between "the foundations" of programming/software development, domain-specific expertise, and language/platform expertise. The foundational pieces are pretty much timeless; the platform pieces are ephemeral. Getting good at the foundational pieces ("the maths" as you call it) is a total game changer for being able to sit down with some new/unknown library or codebase and get to a level of understanding quickly.


You had me at numerical algorithms but I'm not sure I agree about functional programming and type theory. I used to like functional programming and type theory when I was younger and found it completely useless as I moved further along in my career.

I'll make a different suggestion: set theory (which builds intuition around relational data modeling) and distributed systems (which builds intuition around building scalable services and software architecture) are by far the most important theoretical foundations I've had to actually apply at scale. That's followed by data structures & algorithms and discrete mathematics (for the rare times I really do need to write some optimized inner loops).


I don't want to be dismissive of set theory, because it has an extremely rich history and is still seen by most as the foundation of maths, but I believe there are some fundamental issues with it that make type theory a more appropriate foundational choice for general computation and programming language theory/design. There's a reason proof assistants and proof languages are based on some version of type theory rather than set theory.

To a first approximation, type theory and set theory play similar roles. There has been a lot of noise in recent years around replacing set theory with type theory as the foundation of maths, at least in the context of proof assistants and programming language theory.

But nonetheless, I agree that knowledge of set theory can't hurt (and practically speaking, you can't go far in your maths journey without it).

(I would also note that functional programming is a special case of relational programming.)


But when is the average engineer going to need to deeply understand functional programming vs needing to deeply understand SQL? The latter governs how your data is stored and mutated at rest while the former, while an influential flavor of programming, is ultimately not the model that won out in the market (and for good reasons IMO).

Most of the volume of useful work to be done in programming is about state manipulation, which is mostly relational database operations, which is mostly about set theory.
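
A tiny Java sketch of that mapping, with in-memory set operations standing in for their SQL counterparts (the data and names here are made up):

  import java.util.Set;
  import java.util.TreeSet;

  public class SetOpsDemo {
      public static void main(String[] args) {
          Set<Integer> activeUsers = new TreeSet<>(Set.of(1, 2, 3, 4));
          Set<Integer> paidUsers   = new TreeSet<>(Set.of(3, 4, 5));

          Set<Integer> union = new TreeSet<>(activeUsers);
          union.addAll(paidUsers);            // roughly SQL UNION

          Set<Integer> intersect = new TreeSet<>(activeUsers);
          intersect.retainAll(paidUsers);     // roughly SQL INTERSECT

          Set<Integer> except = new TreeSet<>(activeUsers);
          except.removeAll(paidUsers);        // roughly SQL EXCEPT

          System.out.println(union + " " + intersect + " " + except);
      }
  }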

I think every engineer should learn foundations that best equip them for the market long term, and prioritize their order of attack along those investments which will earn them outsized value. Data structures and algorithms, set theory and distributed systems equip an engineer extremely well for that.

I have a very hard time believing deep investments in functional programming and type theory would differentially pay off better.

Now, if one's goal is not to optimize for the market, then I think the answer is a lot more broad.


Arguably, biochem also has "solid foundations" in physics. Would you recommend someone in your field to "follow the physics" in their journey?


Yes. Also follow the chemistry, and follow the maths.

Any answer other than "yes" falls squarely in the anti-intellectual category... But of course all the usual pragmatism applies unless you are a point particle having indefinite lifespan.


> the usual pragmatism applies

That sounds like by "follow" you don't mean "study a lot"... What do you mean then?


I think you're putting words in my mouth. I meant everything I wrote literally and there was no subtext between the lines.


Excellent advice. What works great for you or your team won’t work well for somebody else. So be humble about declaring your way to be “The Way”.

For example, I prefer statically typed languages. It just works well with how my brain works. I am super productive. However, I also understand that dynamic languages work better for some developers. That’s great! I will naturally seek out jobs and teams that prefer statically typed languages, and they will seek out jobs and teams that prefer dynamic languages. No problem.


That debugging book is addressing something I've been thinking about recently: how does one help programmers advance their skills from a fresh entry point? I am convinced part of that is understanding how to use tools to inspect and solve problems. I'd probably buy that book based on this recommendation alone, but it is also something I was hoping to find.

I believe other key parts of helping people advance are around teaching them how to read code and how to recognize/reduce unnecessary complexity; I don't really have solutions for that as of yet. There is plenty of text on DRY, SOLID, the Law of Demeter, and so on, but I don't know how to move that knowledge from academic things one knows into tools for practical use.


> Julia Evans once said "behind every best practice is a horror story." If you don't understand a Best Practice, look for the horror story that inspired it. It might make the best practice make sense. It might turn out to be something that's completely irrelevant to you, and then you can feel comfortable doing a different practice instead.

This is how I think of Chesterton's Fence. A lot of people read it as saying "don't tear something out until you know why it's there", which is, I think, half of the point. But the other half of the message of Chesterton's Fence is that once you can explain to him why the fence is there, you're entitled to tear it out if you still think it's a good idea. The point isn't to avoid changing things, it's to always understand the reasoning behind the status quo before changing it.


Agree with many of the points raised in the link.

Mine are simple -

Leave your ego at the door.. especially if you are a new/young talented or "prodigy" programmer.

Take any criticism on the chin. Be open minded and learn from it. Chances are they are not being negative towards you or your coding solution. In this industry, people are going to be direct with their choice of words.

Be honest. If you don't understand, ask for clarification. If you are writing code or using some library you have not used before, let it be known. You are not saying "you cannot do it"; rather, you are showing you are excited to take on the challenge and learn. Most young devs are generally like this.

Co-workers will come in all shapes and sizes, and range from social to antisocial, alongside introverts or somewhere on the spectrum. I always try to meet in the middle with everyone in this field.

Feel free to raise concerns or ideas, but at the end of the day respect the decisions being made even if you do not agree with them. People above you like seniors, leads, etc., have experience and may have been through past projects and know things. Of course, if you are "correct" in many ways, I am sure you will be recognised (eventually) as you gain experience yourself.

All this said and done --- always be patient and don't worry if you are not "making an impact" in your department. Good things come with age.


"what advice would you give to xxx"

Other than a means to gain advice, I've found this question a good gauge for collaboration compatibility.

I often ask it in interviews on both sides to give me insight into what someone currently values.

It's sort of like asking someone to define "better" be that in skill, or happiness or avoidance of pain.

My main interpretation of the post would be a high value placed on pragmatism, gained via a journey of experimentation. Put crudely: there is no silver bullet, but try a few for a while.

I think point 10 is highly underrated.


I used to keep a list like this on Quora: https://www.quora.com/What-are-the-best-secrets-of-great-pro... I used to update it a few times per year. Haven't done that recently. Maybe it's useful for someone else, I don't know.


If you think that you've potentially fucked up big, seek help and own up immediately. Phone the person if they are not responding to messages.

Everyone expects juniors to make mistakes, you'll earn respect if people know they don't have to worry about you delaying or trying to hide issues.

Also, ask questions! Again everyone expects juniors to lack knowledge, and yet something about our schooling makes us embarrassed to ask.


> If you think that you've potentially fucked up big, seek help and own up immediately.

Also, if you think you might fuck up big, partner with someone else first.

"If you're going to go down, bring people with you."


"Advice for new software devs who've read all those other advice essays" Ha! That reads like someone is trying to lure new developers into recursive essays! Be aware! :-)


These are really good tips, sober and undogmatic.


As a 50+ I have to say this. If you take away anything from that list - make it #8. It will save you so many problems later in life. (TL;DR: Take walks)

To follow advice #6: If you need the horror story to understand, look up "Thrombosis".


Don't stop learning. Be curious. Experiment. Exchange with others.


Advice considered harmful.


My best bit of advice for any programmer at any level: "Don't make stuff more complicated than it has to be!"

Software is complicated. Large, feature-rich software is even more complicated. That's hard enough to manage as it is. The last thing you want to do is to throw a million abstraction layers, frameworks, libraries, precompilers, transpilers, build steps, validation hooks, style checkers etc. into the mix. Each of them makes your project more complex by a certain factor. And not only do they add up - they multiply!

Now, that doesn't mean that you need to build everything bare bones without the help of any third-party software - just make sensible choices. Unfortunately, developers are way too prone to add another shiny thing to the mix.

So here's how you decide if you really need something:

At first: You don't - continue as is.

Then: If the problem persists and the suggested solution keeps coming up, still refuse, but investigate the solution.

At last: If the problem persists, the solution seems well suited to address it and it keeps coming up - accept that you have the problem and adopt the solution.


This is so hard in practice.

I just had a junior dev rewrite some of my code so that: a builder calls a constructor which instantiates a builder factory which builds a builder then that second builder creates the object.

This whole system only builds one type of object. He thinks that his solution is better because it’s more extensible.

I can’t make him see why it’s bad.
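
For anyone who hasn't had the pleasure, a hypothetical Java sketch of roughly that shape next to what it replaced (all names invented):

  // The "extensible" version: a factory that builds a builder that builds
  // the one and only kind of object this system ever produces.
  class ReportBuilderFactory {
      ReportBuilder newBuilder() {
          return new ReportBuilder();
      }
  }

  class ReportBuilder {
      private String title = "";

      ReportBuilder withTitle(String title) {
          this.title = title;
          return this;
      }

      Report build() {
          return new Report(title);
      }
  }

  class Report {
      final String title;

      Report(String title) {
          this.title = title;
      }
  }

  public class Demo2 {
      public static void main(String[] args) {
          Report roundabout = new ReportBuilderFactory().newBuilder().withTitle("Q3").build();
          Report direct = new Report("Q3"); // the entire system it replaced
          System.out.println(roundabout.title + " " + direct.title);
      }
  }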


> This whole system only builds one type of object. He thinks that his solution is better because it’s more extensible.

> I can’t make him see why it’s bad.

Schools teach OOP as though adding new types of objects is the norm—like every type of software construct is actually a GUI widget in disguise and we're going to be adding new interoperable subclasses every other week, so we may as well get the infrastructure set up to make that easy.

In most real world applications, the only type of object that behaves that way is, well, GUI widgets. Nearly every other type of construct in a typical system will have at most one implementation at a time (possibly two during a gradual transition). Factories, builders, and the whole design pattern menagerie aren't particularly useful when for the bulk of a system's life they're all just proxies to a single constructor.

I don't know if there's a good way to teach this out of someone besides just letting them experience it, but that's the insight that he needs—different types of code have different maintenance characteristics, and the tools he's been given were developed for a very specific type of code and don't apply here.


At least where I went to school, the teaching staff in university CS departments seem to be overrun by guys who haven't actually worked in the software industry since the 90s when OOP was the hot new thing. My theory is they were there when they got to build a bunch of new systems with OOP, but didn't have to stick around for the nightmare of trying to maintain those codebases.


Where I live (tenth-largest city in Spain, so not that big) I'm pretty sure I can trace misconceptions about OOP to whoever taught that class in the only university that had a CS degree around here some 20 years ago. I've seen like half a dozen teachers saying the same stuff explained in the same way, and I'm sure none of them understand OOP at all. Only one was able to explain why getters and setters ought to be used besides saying "it's the standard", and to provide an actual, reasonable use case for it rather than the tired examples of "Yeah OOP is cool because both cats and dogs eat and have 4 legs, so you can keep a lot of the code together rather than duplicate it".

Software development is reliant on self-learning, but still, some education can be outright damaging.


Entities in game systems tend to behave like that too when the whole thing is under development - but then (a) arguably a monster chasing you around is just a special GUI widget with extra behaviour and hit points; and (b) when things get really complex it doesn't hurt to switch from OOP to ECS for games.


I don't have a lot of knowledge on ECS, are there any good articles out there that compare it to OOP?


I recommend reading this book on Data-Oriented Design[0], since the underlying reason ECS has been widely adopted is that, written correctly, an ECS system can take full advantage of modern hardware. Most ECS vs OOP blogs come at it from a perspective that I personally find flawed. OOP has the tendency to have objects laid out poorly in memory, which prevents programmers from utilizing SIMD instructions[1].

[0] https://www.dataorienteddesign.com/dodbook/

[1] For instance, objects in OOP are typically bundled with other data related to the object. As an example, in OOP a Door object could have float X,Y,Z coordinates, an enum for the type of door it is, a bool for opened or closed, etc. This leads to inefficiencies in CPU cache usage when you are iterating through a list of doors and not using every field of the door object.

See this GDC talk for why this matters in the gaming world: https://www.gdcvault.com/play/1022248/SIMD-at-Insomniac-Game...
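
To make footnote [1] concrete, a rough Java sketch of the two layouts (field names taken from the Door example above; everything else is made up):

  enum DoorType { WOODEN, METAL }

  // Object-per-door layout: iterating over positions means chasing a pointer
  // per door and dragging the unused fields (type, open flag) through the cache.
  class Door {
      float x, y, z;
      DoorType type;
      boolean open;
  }

  // Data-oriented layout: one array per field, so a pass that only needs
  // positions walks contiguous memory and touches nothing else.
  public class Doors {
      float[] x, y, z;
      DoorType[] type;
      boolean[] open;

      Doors(int n) {
          x = new float[n]; y = new float[n]; z = new float[n];
          type = new DoorType[n]; open = new boolean[n];
      }

      void moveAllUp(float dy) {
          for (int i = 0; i < y.length; i++) {
              y[i] += dy; // tight loop over a contiguous float[]
          }
      }

      public static void main(String[] args) {
          Doors doors = new Doors(1_000);
          doors.moveAllUp(1.0f);
          System.out.println(doors.y[0]);
      }
  }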


This tutorial: http://web.archive.org/web/20120314005352/https://www.richar... although now defunct, makes a good start. Around halfway through there's a heading "Abandoning good object-oriented practice" which gets to the point you're asking about.


Anything that argues composition versus inheritance.


We came up with a rule, "no single-use abstractions": any abstraction with a single caller gets absorbed by its caller. We eventually had to add an exception for MVC boundaries (the project is MVC-organized), or else we got some weird code organizations (controllers consume everything). It works pretty well though; in some cases we have some long methods, but it really killed the premature-abstraction "this will be useful in the future" discussions that eat a lot of time and energy. I think having a simple rule for everyone makes it easier to follow.
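
A made-up before/after in Java of how the rule plays out:

  // Before: an abstraction with exactly one caller.
  class UserServiceBefore {
      void register(String email) {
          validateEmail(email);
          // ... save the user
      }

      private void validateEmail(String email) {
          if (email == null || !email.contains("@")) {
              throw new IllegalArgumentException("invalid email");
          }
      }
  }

  // After: the single-use helper is absorbed by its only caller.
  class UserServiceAfter {
      void register(String email) {
          if (email == null || !email.contains("@")) {
              throw new IllegalArgumentException("invalid email");
          }
          // ... save the user
      }
  }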


Every abstraction for extensible code implicitly assumes certain kinds of extensions and makes other kinds harder. Make him extend it in one of the harder directions :D


Good idea


Premature generalization is the root of all evil nowadays.


Experience will make him see this.

Experience did the same for me.

It just takes some time.


Not if he does the type of job hopping typical in silicon valley. There are plenty of programmers who only stay at a job ~2 years, and never have to maintain a program over a long period. You could imagine such a person saying 'this worked well at my last job' and just keep introducing that kind of pattern to new companies, eventually with '20 years of experience'.


Of course really it is 6 months of experience repeated 40 times.


Agreed. It's "fun" to start breaking your code down into small bits; more interchangeable "pieces" -- until you have to maintain it for 10 years (or worse yet, explain it to a new junior dev) and the extensibility you envisioned never came to fruition and now it's just a liability spread across 8 source files.


It's hard in practice because the industry values delivering something over everything else. I don't know how many times I've seen decent architectures turn to complete crap because leaders felt obligated to put their stamp on it.


Make him maintain one that someone else wrote.


Or... get them to maintain their own code 14 years in the future. Hard to do in practice, but once you've done it, you get a new perspective.

I got a call from a guy saying "hey, this broke". I had not touched this system in 13 years. I had to re-remember a bunch of stuff. Some of the code was still a joy to work with. The parts that were hard were... obvious, and even then, I remember cutting corners, not documenting stuff, and overcomplicating things "just in case" (which mostly never happened).

No amount of someone 'telling' me ahead of time how much difficulty I was leaving for the next person (or even myself) would have gotten through. It really seems like the only way to get this experience is with time. That doesn't mean you can't follow some best practices that turn out to be beneficial. But until you've experienced the downsides, you won't really be able to internalize the why regarding something being good or bad.


That or make him try to maintain his own code a year later when it may as well have been written by someone else. This is really the only way to properly understand the problem with making all those abstractions. It all makes sense when you have the whole structure in your head but it's very hard to build that mental model back up later.


Your solution was also extensible, this is proven by the junior dev being able to easily refactor it into an overcomplicated mess.


There’s another heuristic that he broke.

If you can easily read and understand some code, then it’s good code and you should leave it alone.


What I do is make them test everything their code might do. "Oh you built a generic builder builder builder huh? I need at least 80% coverage for anything the builders might build." The code shrinkage you can achieve is remarkable.


Show him this discussion


  Debugging is twice as hard as 
  writing the code in the first place. 
  Therefore, if you write the code
  as cleverly as possible, 
  you are, by definition, 
  not smart enough to debug it.
     -- Brian Kernighan


Yes, but (not trying to in any way discredit half of K&R here) it can be useful to write clever code – ideally for personal projects – to both see what the language is capable of, and to learn first-hand why this quote is evergreen.


Clever code should be wrapped up in a parser and you should approach the process problem from a meta perspective, such as Mr. Kernighan did with his help in writing AWK.

Recursive descent, flex/bison, parser combinators… these are the tools to manage complexity. This is why lispers go so nuts once it fully dawns on them the power of manipulating the AST along with any other data structures in the program… it’s just a shame about all those parens!

We’re not writing machine code, and most are not writing assembly or even C or Rust. Most hit the memory-managed languages, and it seems our abstraction has stalled, with endless language features applied to our current layer of abstraction. It’s like goto programming before better abstractions were discovered.


Not an intelligent quotation. Hardness can be defined as how long it takes to accomplish a task. Following this definition everything can be debugged. It just takes twice as long.

This definition isn't absolute and neither is the definition by Brian. The truth is much more complicated.


Your premise is false, time to completion and difficulty are not equivalent.

Believing such implies it’s more difficult to ride in an aircraft around the world than to legitimately beat Magnus Carlsen in a rapid chess game.

The reverse is frequently the case where running a marathon faster is more difficult than doing so slower.


It's as "false" as his premise of relating difficulty to intelligence. My point is the existence of an alternative statement that is equally and fuzzily true shows that this quote is not really that intelligent.

Reread my post. I literally said neither definition is absolutely true.


I understood what your post was saying, but they aren’t fuzzy equivalents. The basic premise is just false.

Further, the quote wasn’t suggesting equivalency. Rather, intelligence is one bound on debugging, which is clearly true, as you can’t get a flatworm to do it. When trying to debug really clever code, the easiest solution can be giving up and starting from scratch.


Wrong. A more complex program has more possible origins for the bug. So you need to make more hypotheses to check and verify the bug. Time and intelligence are both a factor here. Sometimes one more than the other.

Clearly you don't think bugs are all solved in seconds and limited only based off of intelligence. A harder bug often needs more time to solve. This is common sense.

You're just sinking with the ship now.


> A more complex program has more possible origins for the bug. So you need to make more hypotheses to check and verify the bug.

This isn’t a discussion about all bugs but the class of bugs created from dealing with clever code. Very difficult bugs may be fairly quick to solve in comparison to simple bugs that require some long process to replicate. Time to solve really doesn’t map well to difficulty.

>Clearly you don't think bugs are all solved in seconds and limited only based off of intelligence.

False, you clearly missed me stating that intelligence was “one” limitation, not the only limitation. Poor tooling can be a massive pain, among many other things. Again though, this is talking about debugging a very specific kind of unnecessarily complex code.


>This isn’t a discussion about all bugs but the class of bugs created from dealing with clever code. Very difficult bugs may be fairly quick to solve in comparison to simple bugs that require some long process to replicate. Time to solve really doesn’t map well to difficulty.

It does. A bug solved in seconds is usually considered less difficult than one solved in weeks. It maps easily.

>False, you clearly missed me stating it intelligence was “one” limitation not the only limitation. Poor tooling can be a massive pain among many other things. Again though this is talking about debugging a very specific kind of unnecessarily complex code.

The quote was suggesting that one limitation in absolute terms. By showing the existence of an equivalency I've shown the quote is not absolute. Therefore the quote is not intelligent. Therefore your statement is false and nonsensical.


> It does. A bug solved in seconds is usually considered less difficult than one solved in weeks. It maps easily.

Luck wildly impacts how long ‘difficult’ problems take to solve. I’ve solved bugs in seconds someone literally spent weeks and asked multiple people to help them solve and I’ve had the same thing happen to me.

Thus actual solve time and absolute difficulty are almost orthogonal.

> The quote was suggesting absolutism one limitation. By showing the existence of an equivalency I've shown the quote is not absolute. Therefore the quote is not intelligent. Therefore your statement is false and nonsensical.

Again no, it was saying one limitation becomes significant in a specific situation. Often these bugs may not actually take long to fix, but you’ll suffer when dealing with them. Even straightforward off-by-one errors can be annoying when you have to reason about really tricky bits of code.

That feeling where you spend an hour staring at an IDE with absolutely no clue what’s going on sucks even if it doesn’t take that long to actually fix the issue.


>Thus actual solve time and absolute difficulty are almost orthogonal.

No, it just means luck is another factor. You have luck, intelligence and length of time. Time is correlated with probability, right? You can get lucky and guess right on the first go.

Either way you introduced a third possibility here which goes further to illustrate that this quotation is inaccurate and not intelligent.

>Again no, it was saying one limitation becomes significant in a specific situation. Often these bugs may not actually take long to fix, but you’ll suffer when dealing with them. Even strait forward off by one errors can be annoying when you have to reason about really tricky bits of code.

False. You are absolutely wrong. The statement was made without qualification to a specific situation. Therefore it is made in the context of the universal situation meaning absolutist. Sinking with the ship again.

>That feeling where you spend an hour staring at an IDE with absolutely no clue what’s going on sucks even if it doesn’t take that long to actually fix the issue.

So? This doesn't have anything to do with the topic at hand. The topic at hand is the quotation is wrong. How you feel during debugging is off topic.


> The statement was made without qualification to a specific situation.

The overall statement includes an “if you”, which is a qualifier. So you seemingly don’t understand what those words mean and are objectively wrong here.


"If you" doesn't mean what you're implying it to mean, this is a deliberate twisting of the meaning by you. If you write code as cleverly as possible you can still solve a bug with a lot of time, with luck and/or with intelligence.

He is saying if you write code as cleverly as possible then in ALL situations it will be impossible to debug by you, which is false. The quote is not intelligent.

You know all of this you're just sinking with the ship and trying to manipulate the situation your way while throwing insults at me on my ability to understand language. Insults are a signature move by someone who has clearly lost the discussion and you've lost definitively.

I think we're both done here. After that insult there is no further need to continue the conversation. Please leave.


Nope, nobody writes every single line of code in a program as cleverly as possible; attempting to do so generally means you don’t finish.

And no, it’s not saying that in ALL situations you can’t solve ANY bugs. You only finish debugging when you solve every bug, not just one of them. Further, sometimes you’re going to write a clever bit of code that doesn’t contain a bug and thus doesn’t need to be debugged.

So it’s saying writing buggy code is easier than correct code, so avoid writing clever code or some of it is going to stay buggy. That’s what the quote actually means.

PS: Also, that wasn’t an insult it’s a statement of fact. You’re trying to twist the statement as not being qualified but it’s got a qualifier.


>PS: Also, that wasn’t an insult it’s a statement of fact. You’re trying to twist the statement as not being qualified but it’s got a qualifier.

You're a liar. I clearly am typing English and reading your responses. You're typing to me in English, so you know I understand it. What you said was therefore said with 100% malicious intent, because it's simply not true. You are not a moral person. You're just an asshole and you know it. THIS statement can be claimed to be a fact, not yours, not even in the slightest.

I mean, you know what happens when you call a stupid person "stupid" to his face and then say it's a fact? It's not an insult? Just a fact? You know this. No need to spell it out. You're an asshole.

>Nope, nobody writes every single line of code in a program as cleverly as possible attempting to do so generally means you don’t finish.

You're just making stuff up at this point. A quotation made around a situation that can never occur according to you? You're just lying now.

>And no it’s not saying in ALL situations you can’t solve ANY bugs. You only finish debugging when you solve every bug not just 1 of them. Further sometimes you’re going to write a clever bit of code that doesn’t contain a bug and thus doesn’t need to be debugged.

No, you're just adjusting the meaning to fit your agenda. That's a bit of a stretch. This conversation has descended way too deep into pedantry thanks to your attempt to twist things in your favor.

>So it’s saying writing buggy code is easier than correct code, so avoid writing clever code or some of it is going to stay buggy. That’s what the quote actually means.

No. It's simplistic to say that clever code tends to be buggier. This guy took the extra step to say that clever code is "by definition" not debuggable. Again twisting the situation to fit your agenda.

Man you're done. You need to stop with the insults and stop with these pathetic attempts at explaining your point of view.


Sometimes the truth hurts, get over it.

Ignorance doesn’t mean stupidity it means you don’t understand something. It’s possible to gain understanding when you accept you’re wrong and try and learn, but lashing out means you will forever wallow in ignorance.

> generally means

“can never occur”: this is why you don’t understand. You need to actually read what was written, not whatever nonsense comes into your head.

You only need to debug code that’s not correct. Therefore, logically, when talking about the effort to write code vs debug that code, they don’t mean the effort to write correct code vs debug correct code. Instead it’s the effort to create incorrect code vs debug that incorrect code. Any other interpretation is nonsense.


True. On the other hand, don't try to make things simpler than they are.

https://en.wikipedia.org/wiki/Waterbed_theory


True. On the other hand, the reason things are not simple is probably that your requirements are dumb.

https://youtu.be/hhuaVsOAMFc?t=12


>The reason "simplify" is not the first step, is that it's possibly the most common mistake of a smart engineer to optimize something that should not exist.


Seeing that that's attributed to Apocalypse 5, and Apocalypse 5 was >20 years ago makes me feel ... something not good.


I believe we have another name for this theory: leaky abstraction.


I think an automatic code formatter actually makes one’s job easier, not more complicated.


Absolutely! I'm grateful that I don't have to worry about code formatting any more. But I remember that in one of my earliest jobs the company used style checkers as a pre-commit hook that rejected your commit if they found trailing whitespace. That was before code formatting was part of your IDE. (Especially for us front end devs who used Notepad++ rather than an IDE at the time.)

And notepad++ had no easy way of showing trailing whitespace. So every commit was a dance of commit -> read rejection log -> remove trailing whitespace -> commit again.


I once worked on a project that would not even compile if a function had a param without a comment explaining the purpose of that param, written in a specific format. Every comment was validated at compile time; you couldn't even comment out code just to test something. Life was hell.


I have memories from my postgrad school where any deviation from the expected source code formatting led to a -1 on the mark. On the first few projects it was not rare to see students getting marks well into negative double or triple digits.


Things that don't matter much. A nice consistent style is good to have, but it isn't something worth worrying about that much.


What's stopping you from configuring your editor to automatically strip trailing whitespace on save?


Code reviewers that made me type back in trailing whitespace that the editor had stripped: "this change is on lines you are not modifying". Still makes my blood boil many years later.


Mostly the fact that this was 15 years ago :-)


That completely removes a whole slew of useless comments when people are reviewing code. It's such an amazing win that every single language should have its style guide published and a tool that enforces it without any options.


The key to style guides is to not have so many rules. Agree on the amount of spacing and where to put braces. Past that, it’s obsessing over form rather than function, which you cannot control simply by virtue of having many people working together on the same codebase.

You should worry more about how things are named, the number of abstractions used, and whether the code has any comments explaining the writer’s intent.


This is broadly what reasonable people believe, but there are crazy people who WILL obsess over form and nitpick your whitespaces or other trivial bullshit if given a chance. An authoritative style guide shuts down many such detours.


It's not even just about obsessive nitpicking. There are people who, for example, prefer code with spaces around the parens for conditions and people who don't. If there's no enforced style guide and one from each camp end up touching the same code, you're likely to get a bunch of pointless noise in the diffs. Even if no one is going to hold up a diff arguing over it, it makes it harder to review changes.


I think you can have as many rules as you want as long as there's something like black or rustfmt, just a script you run that auto-formats your code. You never have to worry about the rules because there is no configuration thus nothing to argue about. Sometimes your code gets formatted weird but who cares, just do your work.


Yes, but, on the other hand having a detailed style guide does make for short and simple debate during code review.


This often leads to an extremely annoying codebase, because languages trying to enforce style guides without proper options just lead to inconsistency once any code in another language makes its way into the codebase.

Just having an options file which is checked in with the code, and enforcing whatever is set in there, works much better. You still avoid all the useless discussions about formatting while also being able to set sensible settings which are consistent with the surrounding technology.


Not sure I understand this. Do you mean tools like "go fmt" are going to try to format your Java code?


This is really simple:

* All our code must be linted / formatted

* All their code must be ignored.


> The last thing you want to do is to throw a million of abstraction layers, frameworks, libraries, precompilers, transpilers, build steps, validation hooks, style checkers etc. into the mix. Each of them makes your project more complex by a certain factor. And not only do they add up - they multiply!

Know plenty of people who call this job security


It's worse if your team uses a house-built framework that's too complex for its own good. There's a whole team devoted to the care and feeding of a framework that could be replaced with source code templates or just a clear architecture document with a "cookbook" style set of accompanying documents. God forbid you try to use something else; you'll get inundated with "But that's the company standard!" style job protection complaints.


I think this is explained by the fact that most business logic is dry, and most abstractions are interesting (until you get bored of them). So you wrap your actual code in fun code to make the job bearable.


> I think this is explained by the fact that most business logic is dry

I don't think that business logic is dry per se. The problem in my opinion rather is that in other parts of the software project, there is much more openness with respect to

- trying out new things in new ways

- making the code more elegant

- seeking abstractions

- ...

than in the business logic area.

Believe me: for the kind of business logic that I see at work, I could immediately see ways in which the (non-trivial) business logic could be made much more elegant by using clever mathematical ideas, but suggesting such ideas to other colleagues or the bosses is like talking to a brick wall.


It seems to me that if you can see elegant simplifications (or really any significant improvements) but are unable to implement them, you are either positioned too low in the hierarchy, or in the wrong organization.


This is the likely cause - most software development is dreadfully boring. But at some point in a developer's career, they will have touched the monolith and everything is full of stars, and instead of saying "I'll just hook you up with Magento", they're like "I will build you an event-driven microservices architecture in a custom made C# framework" and disappear for six months while they work late nights.

(I wish I made it up. This was what a CTO at a previous project did. 30-odd people were waiting and churning on while he was unavailable because he had to indulge his own things. And once they were beyond the point of no return, both he and the manager that greenlit this project quit, but stayed on as independent contractors. I believe they were demoted or taken off the project and eventually gotten rid of when a new manager was found)


This is a happier ending, could have been "he finished the project, it turned out to be bad but now everyone has to use it".


New life goal: become CTO.


I'll upgrade my original comment: I think the business code is the present, and the abstraction is the wrapping. It looks pretty on the surface, and you have to do a bit of unwrapping to see what's actually there!


Good advice; it improves maintainability, which becomes even more important the larger a codebase becomes. https://grugbrain.dev/ talks a lot about avoiding complexity, and does so in a pretty entertaining writing style too.


Don't solve simple problems with complicated solutions


Known as the KISS principle - Keep It Simple Stupid! Taught to me at university 20 years ago and still valid today.


I think good software like good algorithms is made from simple patterns that do complex things.


Great advice! I wish more people had that insight nowadays.


Read the documentation. Don't skip over it to the part you want; read the whole thing, cover to cover. It takes more time the first time, but it saves you time for the rest of your life. You will look like a genius because you'll have an encyclopedic knowledge of everything. You will avoid problems early that come from the subtle knowledge every piece of tech has, buried deep in the docs. You will learn to solve your own problems faster rather than asking other people about something, because the docs often have the answer.

Read the source code of popular software you use (including the libraries of your favorite language). You can learn esoteric knowledge about programming just by looking at other people's code. One benefit of OSS is that many eyeballs can shave code down to very efficient forms, and you can take those forms for your code.


There are some brains that do very well with this advice, but mine is not one of them.

I only seem to learn by moving back and forth between solving a real problem and finding just enough information to move forward one step. After some period of this, I'm usually able to read (and understand) a book or two on the subject, but never before. Maybe I'm a tactile learner? Not sure what to call it, but if you're not sure what kind of learner you are, try a variety of approaches and don't be discouraged if manuals don't work for you.


This may seem like splitting hairs, but both to you and to anyone in a similar situation, consider instead "skimming" the documentation rather than "reading" it. A lot of times the value of that first "reading" is just in getting a sense of what can be done and an idea about where it is documented. Certainly not memorizing how to do it on a simple read. There's a very popular unexamined idea that people retain things by reading them once and then will forever after retain it, but that idea is obviously stupid once I drag it out into the sunlight and point you at it. That's not the goal of a first read, and it's not what the vast majority of us get out of a first read. (Anyone who can operate that way is invited to make their own plans and let us normies discuss how to deal with our normal reading retention levels.)

I probably "skim" the bash man page once every couple of years and I still find new things that I blipped over every other time.


This is a great clarification. I actually do this a lot of the time (not always), but I do tend to skim the TOC and scan around before starting a new thing and it really is helpful.

As you said, it's different than RTFM'ing, but it's an important clarification.

Thanks!


Same here. If I were to read docs cover to cover, not much would stick in my brain. I need top-down research: start high level, and drill down wherever I touch base with practical applications. It's why I find interacting with LLMs so pleasant (ignoring their shortcomings), because I can learn exactly the way I want to and the way that works well for me.


The knowledge from reading all the docs won't really be "usable", but it gives you a vague, almost subconscious idea of all the things that exist and how they interact with each other.


In general it is almost always better to learn using multiple styles instead of just one: if you prefer to do A, then don't just do A but also try some B to improve your learning. Learning based on solving specific problems is good for getting experience, but it is not good for getting an overview of a field; you really want to get good at both the overview and the nitty-gritty.

Also, the purpose of reading a book isn't to become a master at implementing details; it is to get a good overview of the subject. You still have to practice implementations afterwards, but by having seen everything before, your mind will slot problems into place much better, and you will be more confident as you work through those implementations instead of looking up random advice on the internet.


I think of this as bottom-up vs top-down learning. I get frustrated working bottom-up one bite at a time; I need a map, I need to know the lay of the land and where I'm going. The older I get, the more I sense that bottom-up learners have an advantage, some sort of worse-is-better just-get-started kind of thing.


> I need a map, I need to know the lay of the land and where I'm going.

My visual image is a tree. I first need a rough shape of the trunk and branches, before I can start adding smaller branches and leaves.

Bottom-up then feels like having a bag of random leaves, instead of a tree.


This is what I mean when I talk about different kinds of brains.

Some of us need a certain amount of clarity and structure before we can feel good. This is great! Especially if we're aware of what we need and figure out how to get it.

Others of us have trouble building from the roots, and we're more effective probing around in dimly lit fog with occasional bits of moonlight illuminating leaves.

Eventually, we may both find ourselves in a well-lit forest, but we approach from different starting points.

The important thing (for me, at least), is to know which path I need at a given point in time and for a given context.


Well, at least it's obvious where this is coming from.

> You will look like a genius because you'll have an encyclopedic knowledge

> You can learn esoteric knowledge about programming

– and you will pay an incredibly steep price for it.

If you want to be effective, steer clear of this one. Systems are becoming so complex that it's absolutely impossible for any one person to understand them all, and your best bet is getting efficient at poking them to figure out what's going on.


How so? Knowing as much as possible about the systems should be the norm, it is a super power because most people are too lazy to be bothered with this.

Knowing the system makes it very easy to poke the relevant people, find the relevant people and have meaningful discussions with those. Heck, it might even turn you into a meaningful person yourself.

The alternative is stumbling around aimlessly and parroting input from others without understanding the smallest thing about it. This is something LLMs excel at, for free; the former (actually knowing the system), not so much.


Because mental capacity is finite, and knowing as much as possible doesn't work for everyone, especially if it's disconnected from actual problem solving. Reading a book cover to cover and having everything in it stick in your mind is great if it works for you, but there are plenty of people for whom it does not.


The problem isn't that mental capacity is finite, it's not (or the limit is too high to be relevant). The problem is that by spending time learning esoteric knowledge, you're only displacing time for more applicable knowledge.


It saves you time, reading is quick, debugging without a good understanding is extremely slow.


The margins are infinitely expensive. "As much as possible" has no meaningful boundary, when working in a system that is too big to hold in your head.

Mastery is alluring - it's just not very effective and certainly really bad advice for new software devs, who are in the worst possible position to judge the margins and what is useful.


Exactly why new people (new to the job in general or just to the system in question) need mentoring and leadership while doing so. And guess what you can do if you went through all of this? You can mentor and teach others, and if combined with doing, there is no better way to mastery than this, at least not one I am aware of.


You should try this for C++ and Linux. Read the spec of each cover to cover.


Of the language? Probably not. The documentation for the piece of software written in C++ you are working on? Absolutely yes.

Edit: Regarding Linux, you are probably not touching every function or component of Linux either. The less you are concerned with it, the less important it is for you, and the less of it you have to read. That being said, a full-time Linux OS dev probably should have a solid understanding of the complete OS to begin with.


Probably not? So your advice is wrong.

You are obviously picking and choosing your documentation based on how easy it is to read. So your advice isn't universal. Clearly you don't fully follow it.

There are a lot of contrarian opinions here due entirely to the fact that people won't read certain documents, for the same reason you avoid reading the C++ spec.


I don't need to read the C++ docs; I am not concerned with it. If C++ is part of your, what's it called, stack, read the parts of it that matter for your use case and the parts directly related to your work, whether in C++ or whatever other language you are using. Do that for every other language in your stack. I never said anything else, did I?

How deep and broad your understanding has to be depends on:

- your team, nobody can know everything alone, it is a team sport

- your specific use case, I cannot help you with that

- complexity of your use case

- your role, if you are responsible for graphics under Linux, sure as hell you read that part of the Linux documentation, front to end, multiple times and master it

- and, as always, know your system, stack, tools and documentation well enough to realize when you have to look stuff up and where


>your specific use case, I cannot help you with that

I don't need your help. You need it. I'm helping you realize your statement is wrong.

First you make a statement saying one should read all the docs. Now you say devs should read the relevant docs. Devs do the latter anyway. Your argument just evolved into the point you're arguing against.

Clearly you didn't mean to do that. You just meant to read more docs than you normally would, on topics a little further from the relevancy at hand. But people are responding to you based not on what you meant but on what you said.


All the docs, if taken literally, would mean all the docs for everything you interact with in your life. Obviously impossible, isn't it?

Not every statement has to be taken literally. It seems that on HN, more often than not, one has to be incredibly specific in one's comments. That is like talking to a genie or something, really frustrating. I kind of assumed, and without going through all my comments I think I also mentioned it, that by "all the docs" the meaning was all the relevant docs; I even specified that explicitly later on, didn't I?

So, why exactly do you think that reading job- and task-relevant documentation is not necessary, or even the completely wrong approach? Seriously curious, because I run into such people ever so often at work and usually fail to explain to them why they actually have to read that stuff if they want to be a useful member of the team. Understanding why they have that opinion would really be helpful.


>All the docs, if taken literally, would mean all the docs for everything you interact with in your life. Obviously impossible, isn't it?

No, but even if C++ is the central language of my stack, your comment can reasonably be interpreted as suggesting I read the entire C++ spec. That's not an outlandish interpretation, given how many people interpreted what you said this way.

>So, why exactly do you think that reading job- and task-relevant documentation is not necessary, or even the completely wrong approach?

Did I say this? No.


Man, you just did. If your job is C++ development, yes, I absolutely expect you to be familiar with the full C++ documentation and master the parts relevant for your specific usage of it. If C++ is only part of your stack, replace familiarity with C++ specifically with familiarity of your whole stack.

I do the same with everyone, actually, myself included. Plumbers have to know the specs of their tools, materials and the regulations as well as principles of installation. And they do, the good ones at least.

And the easiest way to get that familiarity is reading the damn documents. If you cannot be bothered with that, well, let me say I am happy I don't have to work with you.


>Man, you just did.

I just did what? Be clear with your sentences.

>If your job is C++ development, yes, I absolutely expect you to be familiar with the full C++ documentation and master the parts relevant for your specific usage of it.

So you were unclear. You want someone to master only parts of the stack but read the full spec. This is completely inconsistent with what you said earlier. You are moving the goalposts. Still, it doesn't make sense to read 1800 pages of the C++20 spec.

Do you read the English dictionary because you use English?

>And the easiest way to get that familiarity is reading the damn documenents. If you cannot be bothered with that, well, let me say I am happy I don't have to work with you.

Then you would be happy not to have to work with the overwhelming majority of programmers on the face of the earth. You being in the minority would make you the problem, not others. Programmers in general read the spec and docs relevant to the task at hand; they do not generally read the full formal specifications and documentation of everything they use.


How much is it possible to know about the systems? The app I work on now uses MariaDB (multiple versions at this point), postgres, Cassandra, and InfluxDB. Those are just the databases I can recall offhand; there may be others, and there were others before my time at the company. And those are just the databases. I’m also responsible for hardware, networking, and cloud infrastructure, including IaaS and container orchestration, plus tons of services. Saying I should read the docs on all that is like a cruel joke. Not only are there not enough hours in the day, I doubt there are enough years in my life to imagine such a task. And it would be Sisyphean anyway, because at any given moment one of them will make a “minor” update that breaks functionality and a bit of my knowledge will be useless. Welcome to the world of postmodern system administration; I guess we call it DevOps or SRE now or something; even that’s a philosophical debate these days.


Is it laziness or is it the fact that brains are different and many people (me included) can't absorb that much material and then retain it as well with so many other things going on in our work/lives. I would struggle just to read the entire docs, front to back, of anything in a sitting. Then to retain all that?

"lulz" -my brain


The point is not retaining it all, though. The point is to remember where to look, to know exactly which parts of the documentation are relevant to you, and to retain those. Added bonus if you know who owns the stuff not relevant to you and if you understand how those parts are linked.

That is, honestly, something I expect from people in a professional environment.


The ability to retain information is mostly fueled by your interest in the topic, not by your inherent ability to retain in general. This is what neuroscience has figured out, and it confirms very much my own observations during my lifetime. Inherently uninteresting topics, or things I was forced to learn but which never resonated with me, were always difficult to impossible to remember. Things I care about, or which interest me greatly, were almost easy to learn. Reading a document about an interesting topic cover to cover is easy. Keep that in mind next time you think you have a hard time learning something.


It isn't as much learning as maintaining focus. I struggle badly with that regardless of my interest in a topic.


I submit maintaining focus is also easier if the topic is inherently interesting. However, you sound like you are describing something like ADHD, which probably needs a more specific approach.


Reading is actually cheap. You don't have to understand everything, but being aware of the map of the area in general helps you avoid reinventing the wheel or looking for complex paths.

> Systems are going to be increasingly complex

Coincidentally that’s where compensable expertise is.


C++ 20 - Number of pages : 1853 ( https://www.iso.org/standard/79358.html )

Python language - Number of pages: ~206 ( https://docs.python.org/3/download.html )

Python standard library - Number of pages: ~2337 ( https://docs.python.org/3/download.html )

I don't know, but I have a suspicion you haven't read these cover to cover. Or maybe you have, for whatever the latest version was 10+ years ago. Is your advice to read the diffs when they update?


Is your career in programming in Python? Then perhaps taking 4 weeks to read the Python docs a few hours a day is worth it for the 30+ years you will be using that knowledge 40 hours every week (60,000 hours of work). And yeah, changelogs are there for a reason.
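As a rough back-of-envelope check of that trade-off (a sketch only; the hours per day and career length are assumptions based on the figures in the comment above, not measurements):

    # Rough back-of-envelope arithmetic for the trade-off above.
    # All numbers are rough assumptions, not measurements.
    hours_reading = 4 * 5 * 3      # 4 weeks x 5 days x ~3 hours/day = 60 hours
    hours_career = 30 * 50 * 40    # 30 years x ~50 weeks x 40 hours = 60,000 hours

    print(f"Up-front reading: ~{hours_reading} hours")
    print(f"Career spent using that knowledge: ~{hours_career:,} hours")
    print(f"Reading as a share of the career: {hours_reading / hours_career:.2%}")  # 0.10%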


I've read the Go spec. It's short and sweet and doesn't include the standard library, though.

https://go.dev/ref/spec

Also the ecmascript-262 spec version 5.1, also known as es5.

https://262.ecma-international.org/5.1/

They were actually pretty interesting reads, and not too long. Maybe I should see if there's a PDF of the Node.js and Go standard libraries.


Looks like this might be what you want at least for Node.js: https://nodejs.org/docs/latest-v17.x/api/all.html


Many people's brains cannot absorb that much information. If I sat down and read full docs front to back I would remember maybe 3 things. I don't think this is overall practical advice. Then reading the source code too for all this stuff I like? I don't understand where one finds the time for all this? I'm not putting you down, I just genuinely am confused. I barely get my stuff done as it is each day haha


This is surely an exaggeration. 3 things? Really? Surely you would remember much more than that. And I don't mean this in a snide way. I think I just have a much more optimistic view of what the average person is capable of in regards to reading.

I think there is plenty of time but maybe not in the types of work environments that are the norm nowadays.


> It takes more time the first time, but it saves you time for the rest of your life

It saves you time until the next version of the software gets released, or until the software is superseded by some other, shinier software.


Examples of things where this has worked for me: Linux man pages, internet RFCs, the ECMAScript spec, RDBMS knowledge, math and science theory.


Until the piece of software or system in question is being migrated away from, why would that matter?


Most documentation is not worth reading. It’s reference material to be reviewed on-demand. The “introductory” section that gives an overview is often useful, though.

But that leads to the other problem: knowledge is not intelligence. You’ve studied some Python module but never learned that you ought to use a different one instead.


Agreed. Documentation isn't a novel or essay that you need to read to the end for it all to make sense.


Work has fragmented my brain so much, as I am into so many things at one time, and I am sure a lot of us feel like this. This comment makes me think about how to actually do that, and how much time and efficiency it would cost. Nevertheless, there is truth in it. How do you manage this?


Not OP, but that's how I do it (non-software stuff, but tech documentation is complex for any product, and if it is not, there is no real problem):

- get someone on the team to give an introduction to get some basic knowledge of the system in question, where to find things and what portion of the system affects you

- read the documents relevant as per the previous point

- read cited documents

- start working, and look stuff up every time you have to; dig as deep as possible

- unless you are 100% sure about something, look it up

- rinse and repeat, that grows your knowledge from immediately relevant stuff, to related stuff to general knowledge of the system

- ask question, always, be curious, listen, read and ask questions

It is a team sport; you don't have to know everything. Master your stuff and your interfaces with others (functions, sub-systems, teams and people), get a solid understanding of the overall system, and rely on others like you for their parts.

Going it alone, based on gut feeling and assumptions, is not something I'd advise.


Right, but the original commenter is saying read all the docs, front to back. That's just nonsense.


Maybe hyperbole, because reading the docs front to end is what I do, starting with the ones identified as immediately important and taking it from there. Nobody said this has to be done in one session; it takes time (I am currently in month 6 of such an exercise and can't even tell what I haven't read yet, unknown unknowns and such).


You probably severely underestimated how long documents are today.

Also, reading something cover to cover is not enough to retain the information. Not even close.


Yeah, one of the documents I regularly refer to is 1600 pages on its own. It refers to dozens of other documents that are 1000+ pages each. That's just the hardware though. The languages running on it each have standards docs, which are further hundreds to thousands of pages of material I regularly refer to. Those languages are used to implement software standards, which might be thousands of pages per volume, and have dozens of volumes.

It doesn't matter how fast you read, you're not going to be able to read or retain all of that info.


> Read the documentation. Don't skip over it to the part you want; read the whole thing, cover to cover.

I think there are times when this works and there are times when it will be a huge waste of time. In general, I think it's often very hard to tell which approach will be more efficient for you. As some sibling comments have mentioned, I think the way each person's individual brain works is a significant factor, but I think there are other potentially unknowable factors that are significant as well.

The best thing I've come up with to deal with this is to use an iterative-deepening-like approach. There's a reason this algorithm (pre AlphaGo) was the most common approach for many game playing programs. The general idea is to go a ways down a particular path but always keep in mind some notion of the global suitability of this path, and when it starts to look too hard, back up and investigate some other approaches to at least a shallow (but a little bit deeper than before) depth. This lets you avoid potentially costly dead-ends for relatively low overhead, as in the rough sketch below.

(These thoughts were inspired by this nice talk: https://www.youtube.com/watch?v=Z8KcCU-p8QA)
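For reference, the search algorithm the analogy borrows is classic iterative deepening: run a depth-limited search, and if it fails, retry with a slightly deeper limit, so no single branch can eat unbounded effort before shallow alternatives have been considered. A minimal Python sketch (the toy tree and goal test are invented for illustration, not taken from the talk):

    # Minimal iterative-deepening depth-first search.
    # The toy tree and goal test below are made-up placeholders.

    def depth_limited_search(node, is_goal, children, limit):
        # Explore from `node`, but give up once `limit` levels have been used.
        if is_goal(node):
            return [node]
        if limit == 0:
            return None
        for child in children(node):
            path = depth_limited_search(child, is_goal, children, limit - 1)
            if path is not None:
                return [node] + path
        return None

    def iterative_deepening(start, is_goal, children, max_depth=10):
        # Re-run the depth-limited search with a gradually deeper cutoff,
        # so every shallow option is examined before any deep dive.
        for limit in range(max_depth + 1):
            path = depth_limited_search(start, is_goal, children, limit)
            if path is not None:
                return path
        return None

    # Toy usage: find 6 in an implicit binary tree rooted at 1.
    children = lambda n: [2 * n, 2 * n + 1] if n < 8 else []
    print(iterative_deepening(1, lambda n: n == 6, children))  # -> [1, 3, 6]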


I think a better approach is to be prepared to read the documentation a few times.

For me, the first time I mostly glance through and see the entire scope of things. I normally need to work with things at least a little bit first to really be ready for a more in-depth read later.


Oh yeah, memory is not perfect, and I think that very often you'll have to work with a concept multiple times to fully absorb it. I think we're saying very similar things.


These two points (docs and looking at source code) started working for me only a few years into my career, when I had a really good grasp of the foundations.


While I am not this extreme, I do think that there is value in having a set of tools that you really know and have read the manual for (for me it's git, common unix commands, Postgres, and my "main" programming language). If anything, it gives you an appreciation for programming languages that are not only simple to write but also simple to reason about.


> encyclopedic knowledge

This only works for people with excellent long term memory. I constantly need to look things up in docs and even my own readmes.


About once a month, I google something and find a StackOverflow question about the exact thing... which was asked by myself, years ago.


Ha, same. And sometimes I come across a friend's issue on github or SO. Always amusing.


But you know where to look it up, right?


I think it's better to read the docs for software you have to touch or are especially interested in rather than everything everywhere in your stack, and this is especially true if you are a beginner. Reading about what you touch makes you a better programmer, while reading about what you like makes you a happier one because your future work will often be in your area of expertise.

If you read at random it tends to send you down rabbit holes, unless you're grounded by applying your knowledge or you have the gist of the subject already. Rabbit holes are delightful if you're interested in them, not so much if you're going down them out of duty.


This. People don't RTFM. I make wikis that are intended to be used by the team and new hires, and I make them as short as humanly possible, even suspending grammar and using sentence fragments. Yet no one reads them.


this might be a discoverability problem. my wiki readership went way up when I linked to it from code as well as from relevant internal admin surfaces.
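One cheap way to do that kind of linking (a sketch only; the module, URL, and wiki page are hypothetical, just to show the shape of it):

    # Sketch of surfacing the team wiki where people already look:
    # in the source itself and in error output.
    # The module, URL, and page name are hypothetical.

    """Billing reconciliation job.

    Runbook and on-call notes:
        https://wiki.example.internal/payments/reconciliation
    If you change the retry behaviour here, please update that page too.
    """

    RUNBOOK_URL = "https://wiki.example.internal/payments/reconciliation"

    def fail_loudly(error: Exception) -> None:
        # Put the wiki link in the error message, not just in the source.
        raise RuntimeError(f"Reconciliation failed, see {RUNBOOK_URL}") from error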


Reading source code also helps steering you away from unfit projects. I've decided to replace dependencies after trying to submit a PR and realizing the code was not as robust as I hoped, or was doing too much.


Might work for a toy programming language and framework, but not for anything like C# and the .NET framework (especially when considering the different implementations), Java, etc.


> Might work for a toy programming language and framework, but not for anything like C# and the .NET framework (especially when considering the different implementations), Java, etc.

This is rather an argument against C#, .NET framework, and Java. :-)


Good luck telling your boss/client that you don't write C#/Java because "the document is too long to read cover to cover."


Advice like that above is honestly terrible for newer developers, I believe. It feels insurmountable for many people, myself included. I simply don't have the brain capacity to:

A. Read an entire language's docs front to back. As you mentioned, maybe if it is some small, silly thing, but for something like Python, Java, JS, etc... no fucking way. My eyes would constantly glaze over. There is so much.

B. Retain the information I just attempted to read. Again, there is so much. It's insanely unrealistic for most people, I would think.

New developers see advice like this and immediately feel like they aren't cut out for this because people make this nonsense sound like something 'YOU MUST DO' to be a good developer. It's toxic, in my opinion. Maybe not intentionally so, but it really can kill a person's excitement.


My usual recommendation is to read the table of contents of the manual.

This is especially important for any database you might be using. Language... meh, maybe...? But the DB - please do.


Reading the docs is a bit of a superpower, but life's too short to read everything cover-to-cover. What's worked better for me is to skim everything so I have some idea what's in there. Then, when I need to do "X", I know to check in the docs.

There's a balance to be found here that's likely different for everyone.


Which documentation? If I read everything, it would take a lifetime.


> it saves you time for the rest of your life

It also saves you from fragmented knowledge. Once your knowledge is merely "good enough", it tends to remain half-assed until the end.

