On Learning Rust and Go: Migrating Away from Python (liw.fi)
240 points by edward on March 24, 2019 | 337 comments



15 Kloc causing trouble is indicative of problems other than the language itself. I wrote multiple 50 Kloc-and-up pieces of software in GFA BASIC, arguably a much more limiting and unsafe environment than Python ever was, and yet that software worked well and was maintainable to the point that its descendants still run three decades later.

Python has all the bells and whistles you need to build large code bases, but you do need to get organized, no language can really do that for you and attempts at forcing you to do it give rise to 'factory factory' nonsense. I'm no huge fan of Python; I use it when I have to. But from a bells-and-whistles point of view, Rust is a step down from Python: it is more a better 'C' than a better Python. And Go has its own share of issues that have kept me away from it so far.

If you want to crank out some research project or application quickly building on the libraries that others provide (especially in machine learning or other data intensive fields) then Python would be my first choice, and something like 'R' probably my second (though I see 'R' mostly as a data scientist's workbench that has a programming language bolted on).


Most of my serious work has been done with statically typed languages, though of course I have worked with dynamically typed ones as well. There are a few things about dynamically typed languages that seem to be productivity-negative, but it may just be my lack of understanding of how people expert at dynamic languages work with them.

1 - When working with a large enough code base that you don't remember exactly what every function you have written needs, or when using third-party libraries, function signatures lacking type specifications make it harder to know what is expected. If there is good documentation you can of course load that up, but this slows you down. IDEs today can usually show you the function signature as you type it, but with dynamic types you get less information.

2 - When refactoring something in a statically typed language, changing a data structure or type name will cause the compiler to error every single place that you need to adjust for your change. With a dynamic language you need to rely on unit tests or you will be finding places you missed during runtime for a long time afterwards. If you do rely on unit tests for this kind of thing, then you have sort of built an effectively statically typed environment, at some large degree of effort, with worse runtime performance to show for it.

I understand some of the workarounds people use to deal with these problem but a lot of them seem to push you to the point that you might as well use static typing and get better runtime performance for it.
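Even within Python, the refactoring point can be approximated with annotations and a checker like mypy. A minimal sketch (the class and field names are invented):

```python
from dataclasses import dataclass

@dataclass
class User:
    full_name: str  # renamed from `name`

def greet(u: User) -> str:
    # after the rename, a checker like mypy flags every stale use of the
    # old field: `u.name` would report '"User" has no attribute "name"'
    return "Hello " + u.full_name

print(greet(User(full_name="Ada")))
```

This is still weaker than a compiler that refuses to build, but it recovers much of the "the tool finds every place I need to change" workflow described above.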


Python has had type annotations since 3.5, and you can get static type checking via mypy. This is a type system that works quite well IMHO and eliminates most of the issues you mentioned.
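As a minimal sketch (the function and values are invented), annotations give mypy enough information to reject wrong calls before runtime:

```python
from typing import Optional

def find_port(name: str, default: Optional[int] = None) -> Optional[int]:
    # hypothetical lookup; the annotations are the point here
    known = {"http": 80, "https": 443}
    return known.get(name, default)

print(find_port("https"))  # 443
# mypy would reject find_port(443) with:
#   Argument 1 to "find_port" has incompatible type "int"; expected "str"
```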


As someone who does most of their professional development in Python, I’ve been eagerly waiting for something like Mypy, but my experience with it has been really disappointing and frustrating compared to languages like Go. Mypy mostly seems immature, buggy, and completely unergonomic (typing support was shoehorned into the syntax so most things chafe—declaring a type variable or a callable that takes args and kwargs). Python needs a better static typing story in order to be a productive language in medium-to-large-sized projects.
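For what it's worth, the ergonomics complaint can be illustrated with a small sketch: a type variable must be declared as a runtime object before it can appear in a signature, and (until PEP 612 landed in Python 3.10) a callable taking arbitrary args and kwargs could only be typed imprecisely:

```python
from typing import Callable, List, TypeVar

# the type variable is an ordinary runtime value, not dedicated syntax
T = TypeVar("T")

def first(items: List[T]) -> T:
    return items[0]

# `...` erases all parameter information; there was no way to say
# "accepts the same args/kwargs as some other function"
Handler = Callable[..., int]

print(first([10, 20]))  # 10
```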


As someone who works full time in Python and TypeScript, I find the typing experience quite similar in both languages. You can be "productive" in both languages "in medium-to-large-sized projects".

> Mypy mostly seems immature, buggy ...

That was my experience some time ago as well, but it's getting better from release to release. Look at the repo [0], it's continuously improving. I find Mypy deserves more recognition.

The same guys are working on mypyc [1], which I think is very interesting too.

[0]: https://github.com/python/mypy/graphs/contributors

[1]: https://github.com/mypyc/mypyc


That’s great to hear. It’s been a while so I should go back and try it again!


The main benefit of Python is the ecosystem of libraries. Those libraries don't have type annotations. Mypy is a far cry from a real statically typed language.


There is typeshed for stubbing libraries that don’t have their own annotations, but there are still several popular libraries that can’t be annotated—like SQLAlchemy or boto—because they generate their own types at runtime.
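A toy sketch (the class is invented, not SQLAlchemy's or boto's actual machinery) of why runtime-generated attributes defeat static stubs:

```python
class Record:
    # attributes are created dynamically from whatever kwargs arrive,
    # so no .pyi stub can list them ahead of time
    def __init__(self, **fields):
        for name, value in fields.items():
            setattr(self, name, value)

r = Record(user_id=42)
print(r.user_id)  # fine at runtime; a static checker cannot verify it
```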


The problem is that WAY too many projects are still on Python 2.x.

Either some weird support library hasn't been upgraded to 3.x yet or some expensive piece of software/hardware can't be accessed with Python 3.x


Have you used Python and Python libraries recently? That used to be the case, but ever since the Python 2 "death clock" started, this has become a fairly rare occurrence, except in some specific domains.


I think that you are right, but I'd say that the underlying issue behind these points is the number of people working on the project. Dynamic languages mean that you can rely on implicit knowledge, rather than be explicit about everything.

If your project is just _your_ project, then dynamic languages let you start faster and produce quicker. Apart from anything else, these languages attract people who value productivity, so clunky tools don't survive in those ecosystems.

Of course, almost no successful project stays in the hands of just one or two people: new developers come in, or it's Open Source and more people show up, as well as the "drive-by" contributions.

IME, Ruby on Rails projects scale to about three core developers before you start to feel pain. The strict conventions of the framework help mitigate the communications issues of keeping people in sync. Beyond that, the implicit knowledge that is not in the codebase is harder and harder to scale.


I've worked on a 100 kloc statically-typed project, yet I can definitely see how static typing helps even with a much smaller code-base: I can't write a thousand lines of JavaScript without encountering at least one “undefined is not a function” or equivalent error. Call me a bad programmer if you want, but I do find static typing helpful (and Rust's is really good compared to, say, Java, C# or Go).


From glancing at the obnam code it looks relatively okay: https://github.com/lukipuki/obnam/tree/master/obnamlib

The one thing that stands out to me is that it's heavy on the unit tests and light on integration tests which would make an application like this (at least for me) more difficult to handle. I'd go heavier on integration testing and lighter on unit testing.


I disagree. 15k lines of code is a lot of code to keep entirely in your head all at once, which is basically what you have to do if you're using a language as dynamic as Python. In static languages like Java you can easily use tools to check types are correct, find usages of variables, jump to definitions and so on. And you get compile time errors if you screw up, rather than runtime errors.


I've been doing some Java lately, and it seems to me like Java libraries are often doing (and making the programmer do) a lot of work just to get back the dynamism that Python has by default. Many kinds of errors are only detectable at runtime:

* dependency injection errors,

* template errors,

* magic annotation malfunctions, etc.

The last one in particular actually seems less safe than Python's decorator mechanism. Java splits it in two parts, so that the annotation classes used at compile time are separate from the library that will process them to actually do something. If the latter part is somehow broken, the code will just do nothing, so you don't even get a runtime error.

In Python, decorators are just functions that serve as annotations as well as doing the actual work. There is usually no separate processing part that can go missing. I've never encountered similar problems in Python.
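A small sketch of that point: the decorator below both marks the function and does the work itself, with no separate processor that could silently be missing:

```python
import functools

def logged(func):
    # runs at definition time; if this module failed to import, you would
    # get an immediate error rather than silent inaction
    @functools.wraps(func)
    def wrapper(*args, **kwargs):
        print(f"calling {func.__name__}")
        return func(*args, **kwargs)
    return wrapper

@logged
def add(a, b):
    return a + b

print(add(2, 3))  # prints "calling add", then 5
```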


You seem to be describing things that frameworks like Spring add. Java itself (as a language) doesn't "do" anything with annotations, they are literally just some extra strings in the class-file. Spring and its friends use those strings to do their magic. (Personally I find modern Spring applications as inscrutable as RoR applications which does indeed throw many categories of debugging out the window. Can't set a breakpoint if you have no idea what bootstraps what.)


Right, that's what I was saying. Python does do something with annotations, so you don't get the mysterious magic malfunction that you can get in Java.


Most of the above is due to a proper IDE. Python, too, has one*: PyCharm. It comes with 'find usages of variables', 'jump to definition', type checking, syntax correction, import optimization and much much more. 15K with PyCharm is a walk in the park.

*More than one, of course. But I use only PyCharm :)


It's nowhere near the capabilities of a good Java IDE; being Python, it's very heuristic based rather than definitive. I know that searching for a variable in Java would get me all instances, no ifs no buts. Not so with Pycharm (or any Python IDE).

I support the notion that Python is unsuitable for larger codebases with all my heart.


Surely not.

I can write reflection based Java code that will make automatic renaming choke.

There are ifs and buts. They may be uncommon, but they're around.


It depends on the language: the more metaprogramming is used, the harder it is to refactor automatically.

While you can do reflection/code generation even in Java, almost nobody does it outside framework code.

That's why I feel a lot safer with large codebases in Java, even though I love some dynamic languages too. I don't think they're as scalable for big programs, though.


> 15k lines of code is a lot of code to keep entirely in your head all at once

There's your problem then! You don't write a codebase that size in such a way that you need to keep it all in your head at once. The whole point of 'getting organized' is to reduce the amount of code that is in scope for you to have to understand. If it's more than a page or so then you will always end up with hard to debug problems. Reducing the amount of code in scope is one of the most powerful tools you have to deal with large codebases.


> a lot of code to keep entirely in your head all at once, which is basically what you have to do if you're using a language as dynamic as Python

...you're doing it wrong.


No I'm not.


15K doesn’t seem like much. I’ve spent the last 5 years keeping 250-300K straight in my head, and have never struggled with it. 15K would be a delight. I can recognize hitting a personal limit at 300K, though—not that I can’t hold more, just that I lose all interest in doing so. That seems to be the point where I grow tired of a problem space.


“15K would be a delight” implies that 250-300K is not a delight.

It sounds like your loss of interest may be correlated with your level of discomfort with increased cognitive load... the alternative explanation would be strange: that you can only be interested in simpler problems/implementations. (Which I doubt.)


No, it’s nothing to do with the cognitive load. And I said nothing about discomfort. That’s a pretty interesting interpretation you’ve drawn.

I build things from scratch. By the time a project hits that size, I’m utterly bored with it. It’s been figured out and built. It’s been refactored. It’s been tuned in places that need performance improvement. It’s had new features added in. But fundamentally, the idea(s) behind the software have become completely boring. 5 years of working on the same thing through multiple iterations—including reducing LOC while expanding features. There’s a certain project size that I’ve found correlated with time ticking my internal interest clock to 0.

It’s a bit like how Professor Farnsworth feels when there are no more questions.

Edit: I’m not very good at describing my feelings. Delight was a poor word choice. For me, 15K is typically early in a project, when there is still delight to be found in figuring out the problem space. The unanswered questions are delightful, interesting, thought-provoking. Bringing Professor Farnsworth into it, I was reminded of him saying it was the questions that drive him.


LOC is a very bad estimator of complexity. There isn't a need to write 100K LOC to figure out "the problem space" either.


I never suggested LOC was an estimator of complexity. Heck, I’m not saying anything about complexity at all. Nor did I indicate there was any need to write a single line of code to figure out a problem space. You may be taking my statement either too literally or out of its intended context.


> Python has all the bells and whistles you need to build large code bases, but you do need to get organized, no language can really do that for you and attempts at forcing you to do it give rise to 'factory factory' nonsense

This is almost satirical.

> Assembler has all the bells and whistles you need to build large code bases, but you do need to get organized, no language can really do that for you and attempts at forcing you to do it give rise to 'factory factory' nonsense.

I can definitely see someone replacing python with any X language.

Just because you can do something, doesn't mean you should. I personally like opinionated languages that have made some decisions for you based on best practices and research.


This is almost satirical.

I didn't read it that way - but your retort reads as quite definitely snarky.

Either way -- could you condescend to explain to us what was so outright ludicrous in the sentence you quoted?


Hey there. Imo you are far more condescending than I ever was, it is possible though that you missed the point. Let me explain.

The parent I was replying to said this:

> Python has all the bells and whistles you need to build large code bases, but you do need to get organized, no language can really do that for you and attempts at forcing you to do it give rise to 'factory factory' nonsense

I have a problem with this statement which I hoped to convey by satire (but obviously failed).

The argument being put forth by OP is that Python's lack of static compile-time typing makes it hard to build large code bases. This is a legitimate argument supported by research papers from well-known and trusted entities. The parent I replied to countered this with "but you do need to get organized", but neglected to specify any details about said organization.

Obviously I have a problem with this because the statement is utterly meaningless.

You can make this type of statement about literally every language. I picked assembler to demonstrate this point. Also, it is a low-effort jab against Java programmers, as if they are clueless about organization.

> Assembler has all the bells and whistles you need to build large code bases, but you do need to get organized, no language can really do that for you and attempts at forcing you to do it give rise to 'factory factory' nonsense.

That statement applies equally here. Assembler obviously has all the bells and whistles you need to build a large code base, given that it's Turing-complete and that people have in fact built large code bases with it.

So in summary, the parent I was responding to:

1. Made a vague statement that can be applied to any language.

2. Made a low effort snarky remark about Java developers without providing any support/proof.

3. Said you need to get organized in order to write large code bases in python, provided no further details on the organization.

4. Contained no useful information at all that can actually be applied and used to write software in python or any other language.

5. Tried to counter a legitimate, research supported position with a vague meaningless statement.

I think given all that the satire was warranted.


I'm going to have a go with Rust and Go too, but for different reasons.

IMO Python is still the best language to go with when you are writing anything "scripty":

- Stuff that's not expected to be long. Do one thing well, let some other program do another, and compose higher-order functionality from functions that work. Of course this isn't always possible.

- Stuff where the types are not going to confuse you. Often you just have a few types, and it's fairly obvious what you can do with a given object. If you get too many types, there's a good chance it will start to get confusing and then strong typing would help.

- Stuff where performance is not a problem. Don't write your HFT system in Python. Your simulation, though, might benefit from orchestrating some VMs.

The ultimate use case for Python is glue code. And guess what, a lot of code is just glue. A good glue language has a large ecosystem of libs, and some language features that make it easy to read. That's Python in a nutshell: whitespace formatting makes things easy to read, and you get built in list/dict syntax to keep everything short and sweet.
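A throwaway sketch of that kind of glue (the command and data are invented): run a tool, parse its output, and summarize it with the built-in dict/list syntax:

```python
import collections
import json
import subprocess

# pretend `echo` is some external tool emitting JSON
raw = subprocess.run(
    ["echo", '["a", "b", "a"]'], capture_output=True, text=True
).stdout
counts = collections.Counter(json.loads(raw))
print(dict(counts))  # {'a': 2, 'b': 1}
```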

For me Rust and Go are interesting having come from C++. There's a lot of pitfalls in C++ that might be helped with Rust. At the moment I have a whole load of Valgrind suite tests that look for potential problems. Perhaps Rust can bake those into the language? I haven't had time to look at it yet.

Go seems to be interesting as a compromise between easy and fast, and by the sound of it things have turned out well. I have people on my team using it, and it seems to be the thing to do for web services that need to be faster than nodejs but you don't want to spend a load of time making sure a c++ version is correct.


C++ versions are never correct


There is one and only one language that's both correct and the best tool for scripting: Bourne or its superset the Korn shell (not even bash).


You probably mean the POSIX shell [1] ;) I never understood why folks venture into bashisms for no or little gain (bash isn't even the default shell on Debian/Devuan), or would invent/use entire new languages with singleton implementations such as Perl in the 1990s or Python in the 2000s (other than for fun, of course).

Note that Python is actually a descendant of the ABC programming language developed to be human-friendly, not a shell replacement language per se.

[1] https://pubs.opengroup.org/onlinepubs/9699919799/utilities/V...


No, I don't "probably" mean POSIX, if I write Bourne and Korn shell, then that is what I mean! I grew up on real UNIX operating systems, not the GNU/Linux cheap knockoff! Boy do statements like these make me furious!


What do you have against POSIX sh? Unless I'm mistaken, it's compatible with both the GNU/Linux knockoff and Unix (*BSD, etc).

I prefer it to Korn and Bourne because its compatibility is so widespread.


I have nothing at all against POSIX, but unless one is running HP-UX, Solaris or an illumos-based OS like SmartOS, POSIX compliance is spotty at best in other operating systems.

For this reason, the safest thing to do is write code for either the original Bourne shell or ksh. Programs written in ksh are the most portable across systems, more so since other systems have it too. And to top it all off, ksh sets the standard for POSIX, especially ksh93.


From Google's Shell Style Guide[1]:

"If you are writing a script that is more than 100 lines long, you should probably be writing it in Python instead. Bear in mind that scripts grow. Rewrite your script in another language early to avoid a time-consuming rewrite at a later date."

[1] https://google.github.io/styleguide/shell.xml


Well if Google says so ...

Python brings a ton of dependencies, thereby seriously limiting your deployment options. And, together with Python's not-so-great package management story, this spells a lot of trouble for as long as Python 2 and 3 need to coexist. Many useful Python 2 programs (such as Apple's old caldav server based on Twisted) won't be ported over to 3, and the migration to Python 3 creates an unnecessary burden on apps that chose Python as a user-scripting language, such as Blender. There might be reasons to use Python, but the aesthetics (or lack thereof) of a scripting language, or an artificial limit on the number of lines, most definitely isn't one of them.


Google has no business writing a programming guide for something they aren't competent for; the guy who wrote this is about as clueless as they come when it comes to shell programming, for example he thinks using eval should be forbidden for security reasons. And it's a guide by some company, not gospel.


Because shell scripts usually have lots of dependencies on external tools, it's harder to make them portable. Those external utilities have different behavior on different platforms. I have much better luck with Python, Perl, etc, for scripts.


Shell scripts depend on what is in /sbin and /usr/bin, so software which comes with the OS. If you are struggling with that, then I warmly recommend the "Learning the UNIX operating system" book from O'Reilly to get the basics down. Your argument makes no sense.


You seem to be assuming that Unix like utilities have the same options and work the same way everywhere. They don't.

Shell portability issues are a thing. Also, why all the snark?


> Note that I've not written any significant code in either language, so I'm just writing based on what I've learnt by reading.

So he’s basically comparing 20+ years of Python usage with marketing material from Go and Rust. Right.

I wish people could be honest with their motivations, instead of making up excuses to justify this sort of change. Here it’s a classic case of “I got bored and I don’t like the new features, want new shiny”, wrapped in a bit of whining about problems that have well-known solutions (use a good IDE, use type hints or one of the many libs that enforce types...). I have no problem with that, just don’t try to sell it to me on technical grounds, because it looks very much like you have none.

Python is still one of the best ways to get stuff done, quickly and with a minimum of boilerplate. Its limitations are well-known and have not really increased for about 15 years - if anything they have shrunk in many, many areas. Adoption rates keep exploding, despite a lot of doom forecasted by some alpha geeks about 10 years ago: “There is no mobile story! We will all be using Ruby/Go/NodeJS tomorrow! Nobody will ever go to Py3!” Etc etc etc.

You got bored by it? That’s fine, it happens, no harm done. But you don’t need to convince anyone, including yourself, that you’re moving on because of this or that flimsy excuse.


> I wish people could be honest with their motivations, instead of making up excuses to justify this sort of change

Why do you assume motivations other than what the author claims? I very much started on Python, but as my career progressed, I got tired of being frustrated with poor library documentation and having no static types, making it extremely difficult to figure out even the parameter types to pass in, especially after it's been a while.

After switching to static typing, things have been a lot less brittle and even faster to develop. It is often said that Python is fast to develop in, but this is really only true for prototyping. Making a program and making a robust python program are entirely different, whereas they're much less separate in a statically typed language.

Also, statically typed does not imply FactoryFactory or no type inference, especially for languages like Rust, Swift, and to a certain extent Go as well. Code written in Rust, Swift, etc. can often be as elegant and short as the equivalent in Python, without losing the benefits of static typing.


Wow. It's amazing the lengths people go to to maintain their delusions. Show me one python IDE in action doing effective refactoring across a large, nontrivial python code base. And you didn't even mention concurrency and skipped over it altogether!

> use type hints or one of the many libs that enforce types...

As if that's pythonic. Heard of duck typing? And those are things you have to do yourself, as additional work and maintenance rather than getting it free because it's a feature of the language. And you don't always have full control over the stack.

Is there an IDE that will guarantee that no runtime errors will take place due to typing?

> Adoption rates keep exploding

Because python made its way into the rapidly growing machine learning market. Python is great for writing glue code/scripts because it's easy to extend with C. Everyone and their grandma getting into data science use python.

> I have no problem with that, just don’t try to sell it to me on technical grounds, because it looks very much like you have none.

Looks like you are just not willing to accept or even recognize any valid criticism of Python. This type of fanaticism towards the language is not the first time I am seeing it.


You crossed into personal attack here, and incivility elsewhere in this thread. That's not ok on HN. Would you please review https://news.ycombinator.com/newsguidelines.html and follow the rules when posting here from now on, regardless of how wrong someone is, or you feel they are?


With regards to refactoring I’ve never had a problem using pycharm for refactoring python. But that is literally the only tool I can say that for.


i’ve used pycharm to refactor 100k+ LOC bases.

pycharm also uses the type hints that are PART OF THE LANGUAGE (aka pythonic) as of python 3.x and enforces them.


I've used Pycharm extensively in production. It's hands down the best Python IDE, but the refactoring is nowhere near as reliable as with IDEA, Golang etc. Having a static type system provides refactoring certainty that simply cannot be achieved with a dynamic language.

You cannot do with heuristics what you can do with static types.


Pycharm is built on top of IDEA.


Yes, I know, but it doesn't provide the same quality of refactoring as IDEA can for Java due to the dynamic nature of Python.


[flagged]


ok. tell me what i’ve done and haven’t done. you’re the expert.


> pycharm also uses the type hints that are PART OF THE LANGUAGE (aka pythonic) as of python 3.x and enforces them.

Just goes to show how incredibly messy the transition from 2 to 3 really is. Duck typing and type hinting are two fundamentally different approaches to doing the same thing, and they rub against one another. That's one more thing adding to the long list of things that make it a pain in the ass to migrate from Python 2 to Python 3.


They're optional, and it's possible (and not unusual) to use them in Python 2-compatible code with an alternative comment-based syntax and a backport of the typing module.

It's possible to start using type hints without migrating to Python 3, and it's possible to migrate to Python 3 without using type hints. They're almost orthogonal.
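For example, the comment-based syntax runs fine on Python 2 while being understood by mypy; the interpreter just sees an ordinary comment (the function is invented):

```python
def greet(name, times):
    # type: (str, int) -> str
    # mypy reads the comment above as the signature; Python 2 and 3 both
    # simply execute the code and ignore it
    return " ".join([name] * times)

print(greet("hi", 2))  # hi hi
```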

It's weird to bring it up like this - do you have any experience migrating code to Python 3?


I’ve done all of that. I guess it depends on the codebase, but transitioning to Python 3 wasn’t so difficult, I suppose it should only be easier now than it was in 2016/17.

Coming back to Python in 2019, it seems almost mandatory to be on Python 3.4+. I can not imagine someone still using Python 2.7. Except for this one place I worked that was stuck in the last century, God bless their little souls.


You missed the point, so I'll repeat it. The parent I was responding to made the comment that Python has type hinting, responding to OP's criticism of Python's lack of static typing. My point was that there was no notion of type hinting in Python until PEP 484 or whenever it was released, long after the initial release of Python. Python was released in 1991 and type hinting was introduced around 2014. So for the 23 years the language existed prior to type hinting, people made all sorts of arguments/excuses for why it's not needed/unnecessary for Python. Hell, even now, as a consequence of this, type hinting in Python isn't all that common. It may have to do with the fact that for 85% of the language's lifespan there was no type hinting and duck typing was rampant, which, like I said, is an approach that goes against type hinting.

So if you want people to be pythonic in py3, they have to make massive changes like replacing duck types with abstract classes.

I think it's weird that you don't question why the vast majority of Python code has no type hints. Do you have experience in Python?


It makes sense now, thank you. I think the word "pythonic" was misused in the parent - usually it's normative, but type hints are strictly optional.

I don't think I've seen anyone argue that Python 3 code without type hints is unpythonic. Except maybe the parent, but I think that's just unfortunate wording. It's explicitly using a non-standard definition of pythonic ("part of the language").


You know that protocols and static duck typing are possible, right?

They're not at odds at all.
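A minimal sketch with `typing.Protocol` (available in Python 3.8+; the names are invented): the class never declares the protocol, and the match is purely structural, i.e. statically-checked duck typing:

```python
from typing import Protocol

class Quacker(Protocol):
    def quack(self) -> str: ...

class Duck:  # note: no inheritance from Quacker
    def quack(self) -> str:
        return "quack"

def make_noise(q: Quacker) -> str:
    # mypy accepts Duck here because it structurally satisfies Quacker
    return q.quack()

print(make_noise(Duck()))  # quack
```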


Not telling you what you have done or not done. But this is the internet, you need to provide proof in order for your statements to be taken seriously. I am not telling you to do anything, by all means don't provide any proof.


> But this is the internet

I find that on the internet, after proofs are given, more disagreement ensues. When one's mind is set, it is set in stone, proof or no proof.


Perhaps there should be apostasy punishments for Python defectors. /s

Python's typing story is far from perfect and rather annoying compared to other languages. Python is good for interacting with the operating system, networking and other things.

But wanting a proper compiler is an honest motivation.


There are type checkers now.

Personally, I find type systems a lot more annoying because they lead to more verbose code and more cognitive load. There are studies that show bugs being proportional to the number of lines of code, regardless of language, and that effect seems to be more significant than the difference between dynamic and static type systems.

Defensive programming can be achieved through various means. Type checking is one of them and often can't replace Unit and Integration testing. Validation of Inputs or other data is another issue.


A proper type system offers a level of safety and correctness that simply cannot be achieved without one, especially as the complexity of an application grows.

Also, the supposed lack of cognitive overhead without a type system is actually a subtle form of technical debt: you move faster because you eschew thinking deeply about your data structures up front, but inevitably pay a bigger price in production when you have to debug data type errors at run-time. The debt has to be paid again by every new developer tasked with working on the project, because they have to organically absorb an understanding of the application's data structures that could have been explicitly defined and enforced by the computer.

Finally, the extent of "defensive programming" required to begin to make up for a type system actually adds a ton of extra code itself (e.g. tons of guard blocks testing brittle application values at run time). Not all type systems are created equal, but a code base without types is pretty much always worse than a comparable one with types (the exception being shell scripts and other relatively simple programs that aren't maintained by a team of developers).


> Defensive programming can be achieved through various means. Type checking is one of them and often can't replace Unit and Integration testing.

Of course they won’t replace unit or integration tests, but they will replace a lot of damn trivial tests you have to write to make sure you exercise all paths of the program, because there are no types to help you.

Also, at least when I think of defensive programming, I don’t think of how Haskell et al. allow me to more clearly express my intents in the type system. I think of the times where I write JS/TS and cannot rely on any data anywhere being in the format it says it’s in.


> Personally I find type systems a lot more annoying because they lead to more verbose code and more cognitive load.

Have you forgotten the cognitive load of "what does this return and what can I call or access on it?"

> There are studies that show bugs being proportional to the number of lines of code, regardless of language

But that value isn't the same across all languages, only the trend.


No, that was explicitly true across languages. That the number of bugs increases with code length within a particular language is a trivial fact.


I don’t think Haskell or OCaml are more verbose than Python.

Other traditional languages are adopting type inference, etc. We could be near an inflection point where we get statically type languages that are as convenient as Python.


> I don’t think Haskell or OCaml are more verbose than Python.

I love Haskell's computing model, but holy hell, why is it so difficult to use things like the State monad, or even a trivial thing like a global counter [1]? Why should something so simple be so difficult? Don't say it's because of functional programming: Scheme/Racket isn't that difficult!

Haskell and OCaml have their own problems. I agree with you that both languages are more expressive, but code density is not necessarily a good thing either.

[1]: https://github.com/scotty-web/scotty/blob/master/examples/gl...


A global counter is difficult because shared global mutable state should be difficult: it can wreak havoc on your code. That said, if you’re really set on it, it’s not particularly bad:

    import Data.IORef
    import System.IO.Unsafe (unsafePerformIO)

    counter :: IORef Int
    counter = unsafePerformIO (newIORef 0)
    {-# NOINLINE counter #-}

    usesGlobalState :: IO ()
    usesGlobalState = do
      count <- readIORef counter
      putStrLn ("the counter was " ++ show count)
      modifyIORef counter (+1)
It’s even easier in OCaml, where you’d use ref and not need to worry about the IO monad.

Anecdotally, I find it incredibly rare that such a thing is desirable, when you could instead create it in an IO monad and pass it through to functions that need it. But, it’s there when necessary. And making this kind of anti pattern hard is a strength of these languages, not a weakness.


Oh sure, but Haskell and OCaml are much less popular than Python, with consequences for community building, recruiting, reusing existing code bases etc.

The popular statically typed languages which are actually Python's competitors are for example C++, C#, Java and the like.

Sure as hell they are more verbose. Even Typescript was extremely intimidating to me when I started to try and understand Typescript code bases.

And I am NOT going to try and teach Haskell or Ocaml as a first programming language...


If one were to write perfectly type-annotated Python, and a hypothetical perfectly annotated standard library existed, what’s preventing a faster runtime being developed with performance similar to C#, for example? That will never happen, so maybe “what if” isn’t even worth asking


It's already being done: https://github.com/mypyc/mypyc


Yet the entire data science world is built on Python. I don't understand this blatant disregard of reality.


Python is ideally suited for data science and machine learning because 80% of the work tends to be data exploration, transformations and feature engineering, and 19% is iterative development of models. In such workflows the lack of type checking and the simple REPL are a huge feature. Most of the underlying code for ML is written in Fortran or C anyway, and Python becomes a very convenient front-end glue.

For the 1% work that is actually engineering the model's deployment in production, it's not uncommon to use faster languages with the exported model.

If what you're doing is writing a vanilla CRUD server that gets thousands or hundreds of thousands of hits a second, Python is probably a bad choice.


I’m not sure how many people will ever even work on CRUD apps of that scale.

Even a mere 1,000 req/sec adds up to 86.4M per day. That sounds like something at least as popular as Stack Overflow, for example. How many web apps are there, really, that have millions of users?


Or almost any advertisement tracker server.


Isn't that market dominated by a few, or at the max, no more than 10-15 companies? My point is just that are there really more than a few dozen or maybe a hundred companies that see more than 1000 req/sec (on web apps)?


Depends if you're talking maximum or average. Getting to the front page of HN could easily give you a burst of 1000 req/sec. I would agree there's very few CRUD apps that deal with 1000 req/sec on average.

On the other hand, there's various APIs and tracking types of services that probably do handle well in excess of that, being run by not-large companies none of us have ever heard of.


Uh, why? Thousands of hits per second will generally be IO- or DB-bound and can be served with clear code and a minimum of boilerplate.


I think Python is at a weird place regarding data science.

If you need easy-to-use tools, R libraries cover significantly more ground than Python. You don't have to worry about 2/3 compatibility of niche libraries. Plus the functional language forces the libraries to have similar APIs. You also get to use tidyverse and ggplot2 in all of their majestic beauty. At most you have to write a "%>% do() %>%" to get around weird library API choices.

However, if you are at the frontier, you could avoid so much awkward notation with an array-first language like Matlab or Julia. You have "^" instead of "np.linalg.matrix_power", "/" and "\" instead of "np.linalg.lstsq", the . operator, and so many more life-saving tools.
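To illustrate the notation gap with a small hypothetical example (NumPy required): what Julia or Matlab express as operators, NumPy spells out as full function paths.

```python
import numpy as np

a = np.array([[2.0, 0.0],
              [0.0, 3.0]])
b = np.array([4.0, 9.0])

# Julia/Matlab: a^3 -- NumPy needs the full function path:
cubed = np.linalg.matrix_power(a, 3)

# Julia/Matlab: a \ b -- NumPy again spells it out:
x = np.linalg.lstsq(a, b, rcond=None)[0]
```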


R has so many problems of its own! Attempting to run it at any sort of scale is basically impossible. Its dependency management is atrocious. You can't thread or multi-process effectively. The overhead just to install R on a system is incredible, and amazingly its documentation is even worse than Python's. We use R as our foundational data science language and it has caused nothing but immense pain for everyone attempting to wrap the packages in scalable software.


None of the libraries that do the computations are written in python. None of the GUIs or matplotlib are written in python. If you want to build large projects or need performance, python is not the right language. I do like python a lot - for the things it's good at.


no, the trivial frontend is built in python; the real code is usually c++ or c.

Ignore that reality if you want to, but it is a fact.

Big complicated python projects are seldom pure python, they are usually a friendly python frontend to a serious application written in something else.

It seems in no way remarkable that someone wanting to build a serious backend type piece of functionality would pick another language that was, just for example, multithreaded.


The CPython interpreter itself is a C program. Acting like extension modules “aren’t Python” is highly disingenuous.


Oh please, go read the source code for tensorflow and then come back and we can have a real conversation.


I don't understand this logic. The users are learning Python, not C++, when they're trying to learn data science or implement a machine learning model. Should I say TensorFlow isn't written in C++ but in CUDA or OpenCL? Any self-respecting researcher is training their models on GPU or FPGA, not CPU.

The point is that Python is the entry point for large majority of data scientists currently and its absolutely disingenuous to try to dispute that reality.


> The users are learning Python, not C++

The key word here is "users". Python is fantastic for users. It allows users to get things done without having to worry about types or memory or the underlying hardware in any way.


I’d say it’s very disingenuous to claim that Python users don’t have to care about types. Many types are directly related to semantics, so of course they have to care about types!

I think Python made some really good choices with their types from UX that lead to you having to care more about your logic than the machine. Only one integer type, for example! For most users it’s perfect UX.

However, I think (possibly due to the early time when a Python was made) some of the decisions made practical but regrettable trade offs. Duck typing is ultimately just type inference with strong performance penalties and really weak static tooling, for example. _Many_ static type systems are still very far from the user’s actual domain, but the core techniques don’t have to be.


Not just data science: the entire machine learning and scientific computing world, if we’re honest. Need to do huge numerical linear algebra operations very fast with great memory safety and ease of use? Python.


Technically, you'd use a C/C++ library with a Python wrapper.


Yes, that is called Python.


No it isn't? The substantive portions of the software aren't written in Python. Python is a thin interface to the actual library. Users of the library who need to rapidly prototype can leverage the library using its Python frontend, but the authors of the library did not write the library in Python.


To program in CPython is to use thin wrappers around C-level implemented extension types and modules. For example, all major built-in container types.

If you program in Python for performance-critical applications, as I do, then you will frequently hand-write C-level extension modules, write extension modules in Cython, use JIT tools like numba.

All of this stuff “is Python.”


even for the crud apps so frequently referred to in this thread the equivalent 'real code' is the sql rdbms written in c or c++ no matter what the webapp language is. i don't advocate using python in million line monoliths despite it being my favorite lang in general at the moment but the volume of cognitive dissonance and false dichotomies in this thread is crushing.


Wrong: the data science world is built on R, which was designed from the ground up to be the freeware alternative to SAS and Mathematica.


Oddly, in biology I'm seeing an uptick in R use, as it's easier for non-programmers to install and run tools in R. The Python 2/3 confusion (why doesn't this Google result work?) and the various install packages (conda/pip...) seem to have made R more popular. This is despite Biopython, which is quite good. The newest single-cell RNA-seq analysis tools are in R.

R is a strange thing to me, though it has its moments (graphing). I still think pandas, which gives Python R-like data frames, is one of the great tools out there.


pandas gives base R a run for its money, but I find the multi-indexing really confusing, and R tidyverse is just way way nicer than pandas.

The sentiment that R is easier than python to install/manage is common among some R users but I disagree. With R I'm constantly facing dependency hell problems -- one package wants an old version of R, while another needs the newest. Conda/venv solves this problem very nicely in python.

Recently I've been using Julia more and really like it. One nonstandard case where I've found it really shines is parsing large bioinformatics data, such as pileup files. Python is just so slowww here, but neither do I want to write a C program to do the text parsing. Julia is perfect in this case.


Wrestling with borrow checker is enough punishment already. /s


See the details here about his project, which he maintained in Python for years (since 2006):

https://blog.liw.fi/posts/2017/08/13/retiring_obnam/

I can really understand his:

"Obnam has not turned out well, from a maintainability point of view. It seems that every time I try to fix something, I break something else. Usually what breaks is speed or memory use: Obnam gets slower or starts using even more memory."

Also from his current post:

"I could perhaps have been more diligent in how I used Python, and more careful in how I structured my code, but that's my point: a language like Python requires so much self-discipline that at some point it gets too much."


These two stuck out as well. I wonder if the issue is with their coding practices and not necessarily a fault with the language.

If the issue is one of fundamentals it will only follow them to the next project or language


Give me two lines of an innocent man's hand...

Really, changing a Python project is not pretty, because speed and memory usage guarantees vary wildly between releases. One factor is that CPython is constantly being rewritten.

And that does not even cover projects that depend on 20 PyPI packages, which introduces practically guaranteed breakage.


> One factor is that CPython is constantly being rewritten.

Exactly. I have the impression that programmers as a whole have already spent significant time developing and maintaining different solutions just to manage different Python versions and dependencies in production environments, effort that might have been much less necessary with better "discipline" in the development of Python itself.

On another side, Python won many hearts over Perl 5, which is much more stable but "appears to be" harder. Interestingly, Perl actually forces a programmer to "care more" (i.e. be more precise): a reference to something requires different notation than the direct use of it, using an array requires different notation than using a scalar, etc. It actually has "some kind" of typing enforced by the language, explicit in every line. In my personal experience, I have much more "trust" in a biggish Perl program than in an equally big Python program to behave "exactly how I'd expect it" (e.g. Perl doesn't produce "exceptions" as easily, and whoever wrote it had to think more about correctness than in Python, especially if "use strict" was used, which gives stronger guarantees about typos in variable names than Python does).

Going further, however, C-like (actually Algol-like) compiled languages simply allow much more than the scripting languages regarding the smallness of what is eventually produced. E.g. BusyBox ( https://busybox.net/ ) has to be written in C or something very close to C. Free Pascal ( http://wiki.freepascal.org/Platform_list ) is also a very practical language for many use cases.


Are you making that comment by speculating or by having looked at the Obnam codebase? I'm not a Python developer, but I've looked at the Obnam code enough to have written a replacement crypto plugin for it. AFAICT, if Obnam was holding Python wrong, it's too hard to hold it right.


I haven’t looked at the code, but I hope someone who has been using Python for 20+ years knows how to write and use it properly... so yeah, I’m inclined to agree with you.


Languages encourage various ways or styles of solving problems, though, and some of those ways require more self-discipline or care than others. For example, many languages with static types make it so that you can make sweeping changes to the interfaces between things and the compiler will catch any mismatches for you, but in a language like Python, you'd better hope you have a unit test that will catch it.
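A small Python sketch of that trade-off (function and names hypothetical): with annotations, a checker like mypy flags every call site when a signature changes; without them, you're relying on a unit test happening to exercise that path.

```python
def send_invoice(customer_id: int, amount_cents: int) -> bool:
    """Pretend this used to take (customer_id: str, amount: float)."""
    # After the signature change, a static checker would flag stale
    # call sites like send_invoice("42", 9.99) everywhere at once;
    # un-annotated Python would just misbehave somewhere downstream.
    return amount_cents > 0

print(send_invoice(42, 999))  # True
```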

I like Python well enough, have been using it since mid-2001 and was very much a hardcore Python enthusiast for a number of years, but nowadays I prefer languages that encourage immutability and functional programming styles, as I find that they make it much easier to avoid bugs. I've had plenty of Python experiences where bugs were caused because something somewhere else was mutating stuff behind my back, messing up my mostly-unrelated code. Usually this other code was also written by someone else, so I had no way of knowing about it. This kind of thing can be prevented in other languages, but requires a lot of discipline to avoid in Python (and other languages, I'm only picking on Python here since that's the topic of the article). These problems won't follow you to the next language, if the next language is more suited to avoid them.


Even on larger code bases I've found that the only real discipline needed is maintaining a test suite, and that's true for every language.



You don't need to convince anyone, including yourself, that python's well-known limitations couldn't possibly have inspired this guy to look at other languages.


I was a year-long Python user when I started learning Rust. Reason: I knew just enough C/C++ to know I wouldn't be able to pull through learning it properly in the time I had. But I wanted a strongly typed, fast alternative to Python without garbage collection.

I had read some stuff about Rust and it seemed like an interesting language, mature enough for my applications. I tried it out and came to like it more than I would've expected. In fact I tend to use it over Python for many things nowadays. One of the reasons is definitely how the module system works, but I agree with you: if you fuck up your code structure in Python, the problem is definitely between chair and desk.

However: Rust won’t let you get away cheap if you don’t think about structure, and this could certainly help some people adopt better practices (which in the end is good for all of us).

Interestingly enough, my appreciation for and interest in languages like C and C++ grew a lot after I had learned Rust. It gave me a good new perspective on C++ and taught me a ton of good patterns and concepts that will definitely end up being useful in other languages. Rust is certainly well thought out, and good ideas are never bad to look at.


May I ask what your use case was that required a no-GC language?


> Here it’s a classic case of “I got bored and I don’t like the new features, want new shiny”, wrapped in a bit of whining

This seems to be a rather shallow reading of the blogpost, and the way you phrased your reaction is not exactly polite either. If anything, the "adoption rates" of Python (like those of ECMAscript, a language of a similar vintage all-things-considered) are most easily explained by non-technical factors, so I have quite a bit of trouble making sense of your comment.


There was absolutely nothing impolite in the way he phrased his response. Rather, what I'm responding to is passive-aggressive.


Wanting new shiny is a helpful healthy way to keep exploring and growing.

I've worked with the opposite who absolutely refuse to ever learn python/go/whatever, producing these awful thousand line bash scripts.


Type checking as an add-on is okay but it is not the same thing as a statically typed language.


There is no proof or study that statically typed languages are safer in any regard than others.


Wrong. http://ttendency.cs.ucl.ac.uk/projects/type_study/documents/...

"our central finding is that both static type systems find an important percentage of public bugs: both Flow 0.30 and TypeScript 2.0 successfully detect 15%!"

In fact there have been more than 1 study with the same conclusion.


I didn’t say anything about safety. I find that it makes writing code and refactoring and collaborating a lot easier.


There was a paper linked here on HN a week or two ago that proved a number of safety guarantees using something similar to Rust. Sometimes there is truth behind the hype.


True. However, there is fairly strong evidence that it helps with understanding code.


Type hints throughout the code give you exactly the same understanding.


Only if they are correct. I am pretty sure that in an old code base they will not be correct unless automatically checked before commit.


That's trivially settled with a static type checker.

So the argument is analogous to saying "how can I trust the types in this Java program are correct, if it has never been compiled".

Well, whether some "old code base" has them correct or not, just run the type checker and find out.


If you do that you will probably hurt your productivity. There will still be possible errors from using 3rd-party code.

In my opinion, optional type checking is not comparable to the guarantees of a statically typed language. So if I do not need types, I would use Python. If I need/want type specifications, I would use a statically typed language.


Please cite that evidence.

To me it sounds like circular reasoning, because in my subjective perception, dynamically typed code is a lot easier to understand to begin with. Also, tooling has become a lot better, with and without typecheckers/annotations.


For example: Do Static Type Systems Improve the Maintainability of Software Systems? An Empirical Study

"Despite their importance, whether static type systems influence human software development capabilities remains an open question. One frequently mentioned argument for static type systems is that they improve the maintainability of software systems—an often used claim for which there is no empirical evidence. This paper describes an experiment which tests whether static type systems improve the maintainability of software systems. The results show rigorous empirical evidence that static types are indeed beneficial to these activities, except for fixing semantic errors."

https://pleiad.cl/papers/2012/kleinschmagerAl-icpc2012.pdf

Follow on:

An empirical study on the impact of static typing on software maintainability

"We further conduct an exploratory analysis of the data in order to understand possible reasons for the effect of type systems on the three kinds of tasks used in this experiment. From the exploratory analysis, we conclude that developers using a dynamic type system tend to look at different files more frequently when doing programming tasks—which is a potential reason for the observed differences in time."

https://link.springer.com/article/10.1007/s10664-013-9289-1

Note that I wrote "evidence", not "proof". Also, there is no robust evidence for safety improvements.

Are there potential confounders? You bet. Are there other aspects that can help? You bet.

I do find that types help me with older code, despite the fact that my preferred languages (Objective-C, Smalltalk), both have keyword syntax that helps tag arguments even if there's no types or documentation. Particularly protocols help me define the interactions that I expect in my system, which otherwise tend to be only defined implicitly through the objects. I'd prefer real connectors.

When I try to understand Smalltalk code, that's a lot of browsers open and digging etc. When I look at some of my older Objective-C frameworks, all the id types do leave me with a bit of head-scratching. A lot of the times the types are trivial to add, and just make things obvious at a glance.

However, I also totally agree that these languages help me write cleaner code, and that enforced type systems get in the way of useful architectural abstractions. Sometimes I need that id.

¯\_(ツ)_/¯


I feel exactly the same way with all the Rubyists moving to Elixir.


I'm currently migrating a similarly sized Python and Django application to Go but for different reasons.

For me it's bit rot. My Python version is approaching EOL and needs upgrading to Python 3, and I have a lot of 3rd-party code in here that made things simpler to create, but over time those libraries have changed enough, or disappeared, that I now have to take on work there too. Then Django and its internals have changed a lot in the last 6 years.

My Go code written at the same time (6+ years ago) hasn't had any of its libraries change so much that I've needed to do anything. Every now and then I pull the next major version of Go and recompile, and it's done.

This is all side-project stuff, but a lot of people use it (250k+ average monthly visitors)... it's non-profit and I don't wish to have to maintain it much.

It worked once, if nothing has changed in the functionality it should keep working... but this isn't as true in non-compiled languages where you need to depend on so much of the environment, let alone build culture that doesn't fully vendor dependencies.

I like Python, but prefer to write code and move forward with other projects.

As to Rust vs Go. I code in both and only picked Go here because a lot of the code in another project I worked on can be used here. I'd probably try Rust if I was starting from scratch now. But I'd be a little concerned bit rot would affect Rust too, it's still quite early for the wider ecosystem.


I'd choose a Python 3 migration and updating/removing/exchanging dependencies over rewriting in a different language I know less well, any time.

And good luck finding replacements for Python/Django packages in Go or Rust.

Don't get me wrong. You'll find a lot of libraries for those languages. But probably not everything...


> I'd chose Python 3 migration and updating/removing/exchanging dependencies over rewriting in a different language

They explicitly said they don't want to rewrite. It looks like they have some existing projects in Go that work in a self-contained way and upgrade with minimal effort.


Thanks to this post, I went back to see if my oldest large Go project was working: https://asciinema.org/a/cJosP51z0oScKPLmVUhyNKUOj

9 years later, still works.


If it is stability you are after, choosing Java on the JVM will get you there even better. Java written 20 years ago still compiles and works fine today.


What you're actually saying here is "Using Go I'm going to leave the application with unmaintained versions of its libraries", which is a bit of an alarm-bells statement from a security point of view.

It's perfectly possible to build a python application that is extremely reliably tied to its dependencies, you just have to avoid the default python package "management" solutions that get shoved down everyone's throat (pip, virtualenv, pipfile, tox and the like). I've been building python projects on top of Nix for years and have managed to maintain stable and reliable results.


No, it's really more about the ecosystem and tooling being very stable. Code you've written 6 years ago will most probably work with the current version of Go + updated libraries.


“It’s easy to do it if you build your own dependency management system” doesn’t sound like it’s easy to do it.


Nix is not "your own dependency management system"


Not using Nix, but there have been a few libs I've "pinned" in our remote agent by the expedient of simply including the package source wholesale in our app root. Currently just an old version of PySNMP, but there were others before recent upgrades to cx_Freeze. No doubt there were other approaches, but this one is foolproof.


Having used all three of those languages, I've commented on this before.

Go is mediocre as a language, but it has one big edge - if you're doing web backend stuff, the libraries for that are the ones used internally by Google. So they're being executed billions of times per second on Google servers and tend to be well behaved. Python tends to have multiple libraries for basic functions like the database interface, all with different bugs. If you write web backend stuff in Go, it will probably work.

Having suffered through the Python 2 to 3 transition, I'm less of a Python fan than I used to be. At some point, they need to stop adding features. Currently, Python is acquiring unchecked "type hints", which look like a terrible idea. Optional typing, sure, but unchecked type hints? If the compiler enforced them, they might be worth something. (And no, running a separate type checking tool is not the answer. Especially since no such tool exists that's production grade and in sync with the language.)
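The "unchecked" complaint is easy to demonstrate: CPython stores annotations but never enforces them at runtime, so the following runs without complaint (a separate checker like mypy or pyright would reject the call).

```python
def double(x: int) -> int:
    return x * 2

# Annotated as int, passed a str: CPython doesn't care.
result = double("ha")
print(result)  # haha
```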

Rust has no idea when to stop adding features. Rust started as an imperative language and then morphed into a semi-functional language. Mostly to get a sane error handling model. The borrow checker was a brilliant innovation, though. I used Rust for a while, but quit about two years ago in disgust. I never want to hear "that only compiles in nightly" again.


> Rust has no idea when to stop adding features.

> I used Rust for a while, but quit about two years ago in disgust. I never want to hear "that only compiles in nightly" again.

I don't understand how you can reconcile these beliefs. They stabilized too many features, but you also want more nightly-only features to be stabilized?

It's my experience as a Rust programmer since pre-1.0 that very few libraries require nightly; they tend to be niche things (e.g. Rocket) by authors that prefer to experiment on the bleeding edge of the language over having stability or many users.


I suspect what OP is referring to is the fact that it's just hard to avoid Nightly entirely, even if you don't need the new features.

All it takes is that one library you want to use requires Nightly. Since Rust adds features all the time, it's not uncommon for library authors to start using these features before they're in Stable. It's just the nature of a young language with an active community who likes being on the bleeding edge.

This also extends to the toolchain; it's not that long ago that RLS and Rustfmt were still Nightly-only, for example.

The presence of Nightly alone gives you a choice that usually doesn't exist with other languages. Not too long ago I went down the rabbit hole with some text stream stuff (Unicode is surprisingly poorly supported in places), and ended up with a temporary workaround because the std::io::Chars API wasn't in Stable yet and I just didn't want to deal with Nightly. But it was tempting. With other languages such as Go, the choice generally isn't there, even though you could theoretically download HEAD and go from there.


It's not hard to avoid nightly at all. I've been targeting stable Rust in all my applications and libraries since Rust 1.0 was released. The only exception is when adding a new nightly-only feature, but always gated behind a Cargo feature.

This is a classic case of our standards of success continually moving, and that some domains are better suited to stable Rust than others. For example, if you want ergonomic async/await, then you need to use nightly. But otherwise, the number of crates needing nightly has shrunk enormously over the last few years.

> it's not uncommon for library authors to start using these features before they're in Stable

I would say this is actually pretty uncommon.


> So they're being executed billions of times per second on Google servers

So are Java and C++ programs at Google. golang is very mediocre for webdev; if anything, it can be used for small network-tool kind of stuff. For anything that needs to be more complex, Java would be a better fit (e.g. golang doesn't have annotations that can be used for validation or automatic mapping between different internal and external types, which come in very handy and prevent bugs).


>And no, running a separate type checking tool is not the answer. Especially since no such tool exists that's production grade and in sync with the language.

What does "production grade" mean w/r/t such a tool?


Strongly agree with author. I really like Python and at one point thought that it makes sense to write almost any project in it, since it's so versatile and I know it pretty well.

After working on a quite large application and having to use lots of assert(isinstance(arg, type)) at the beginning of almost every function, I began to think that a strong type system is very much needed for large projects. I believe this was one of the reasons why Twitter moved from Ruby to Scala.
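The pattern described looks roughly like this (function and names hypothetical), and it's easy to see why it wears thin across a large code base:

```python
def apply_discount(order, discount):
    # Runtime guards standing in for a type system:
    assert isinstance(order, dict), "order must be a dict"
    assert isinstance(discount, float), "discount must be a float"
    result = dict(order)
    result["total"] = result["total"] * (1 - discount)
    return result

print(apply_discount({"total": 100.0}, 0.25))  # {'total': 75.0}
```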

I still love to use Python for hacking something quick for myself. But I also look at some popular strongly-typed languages now and hope to get better at one of them soon.


> I began to think that a strong type system

You’re confusing static/dynamic and strong/weak. Python is a strongly typed language, but also a dynamically typed language.

Need proof? Try doing this:

    a = 3
    b = "3"
    c = a + b  # raises TypeError: unsupported operand type(s) for +: 'int' and 'str'

> having to use lots of assert(isinstance(arg, type))

Python has type hinting now, try using a current version of Python and this isn’t needed.
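To illustrate (a hypothetical function, not from the thread): with annotations in place, a static checker such as mypy flags a bad call like `scale("3")` before the code ever runs, so the runtime isinstance asserts become redundant:

```python
# Hypothetical example. The annotations replace a runtime
# `assert isinstance(value, int)`; the interpreter ignores them,
# but a static checker such as mypy rejects `scale("3")`.
def scale(value: int, factor: int = 2) -> int:
    return value * factor
```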


I think that example shows that Python's type system isn't the weakest out there, but it's still pretty weak.

For example, it doesn't have a built-in way to make a dictionary-whose-keys-must-be-integers (so with your example above if you wrote thedictionary.get(b) rather than thedictionary.get(a) you'd get an error rather than None).

I've often seen learners have trouble with this sort of thing (you read a number from a file, neglect to call int(), and get mysterious lookup failures).
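A minimal illustration of that failure mode (names are hypothetical):

```python
# Prices keyed by integer product id.
prices = {1: 100, 2: 250}

# A value read from a file arrives as a string.
raw = "1"

# The string key silently misses: dict.get returns None instead
# of raising, which is the "mysterious lookup failure".
assert prices.get(raw) is None

# Converting first gives the expected hit.
assert prices.get(int(raw)) == 100
```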


> so with your example above if you wrote thedictionary.get(b) rather than thedictionary.get(a) you'd get an error rather than None

Technically, dict.get() returns None if the item isn't found and you don't provide a different value for its default.

thedictionary[b] would throw a KeyError on the other hand.

One could always subclass dict if they wanted to ensure the keys were always integers or whatever.
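A sketch of that subclassing approach (class name is hypothetical):

```python
class IntKeyDict(dict):
    """A dict that rejects non-int keys on assignment. A sketch only:
    in CPython, dict.update() and the dict(...) constructor bypass
    __setitem__, so collections.UserDict is a safer base in practice."""

    def __setitem__(self, key, value):
        if not isinstance(key, int):
            raise TypeError(f"key must be int, got {type(key).__name__}")
        super().__setitem__(key, value)

d = IntKeyDict()
d[1] = "one"        # fine
try:
    d["2"] = "two"  # rejected with TypeError
except TypeError:
    pass
```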

> I've often seen learners have trouble with this sort of thing (you read a number from a file, neglect to call int(), and get mysterious lookup failures).

Not sure if you're arguing for a stronger type system or having the language promote the string to an int as (I've heard) other scripting languages do.

Though, once you figure out the language doesn't automagically parse files for you you're well on the way to finding more fun and exciting ways to get python to throw error messages at you.


strong type system != strongly typed.

Type hinting does not guarantee to catch everything, but for Python proponents 95% is apparently "good enough".


I am using the current Python version and tried type hinting. I don't see how it can help much, since types are not enforced; especially when interfacing with not-so-well-tested modules that don't use type hinting and can sometimes return values of unexpected types.

It's definitely a neat feature when writing an application from scratch that interfaces with few other modules or only the well-tested ones.
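The non-enforcement is easy to demonstrate (hypothetical function):

```python
# The hint says int, but the interpreter never checks it: a string
# slips through and is repeated rather than doubled, and the bug
# only surfaces later, far from its cause.
def double(x: int) -> int:
    return x * 2

assert double(2) == 4       # intended use
assert double("3") == "33"  # wrong type, no error raised
```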


The point is that your example will not break anything until run time, unless some static analysis tools are deployed (which are much more limited compared to what a compiler can achieve).


> having to use lots of assert isinstance(arg, type) at the beginning of almost every function

Python should not be written like this; assertions can be ignored at runtime: https://docs.python.org/3/using/cmdline.html#cmdoption-o

A much better solution is to use qualified try/except blocks to handle potential errors.
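For example (hypothetical function; the default-to-empty behaviour is an assumption made for illustration):

```python
# Instead of `assert isinstance(path, str)` -- which vanishes under
# `python -O` -- handle the actual failure modes and raise clear errors.
def read_config(path):
    try:
        with open(path) as f:
            return f.read()
    except TypeError as exc:
        # e.g. path was None or a list; re-raise with a clearer message
        raise ValueError(f"expected a path string, got {path!r}") from exc
    except FileNotFoundError:
        # hypothetical policy: a missing config file means empty config
        return ""
```

One caveat: open() accepts an integer as a file descriptor, so this sketch does not catch every wrong type.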


> assert(isinstance(arg, type))

The most irritating thing I've found is trying to write a function that accepts either a path or the raw contents of a file.

Python 3 makes this easy: you can check against str. Python 2 of course treats everything as a byte string.

In the end I just check the length of the input if the interpreter identifies as Python 2. If it's a short byte string, I assume it's a file name. Otherwise, the minimum expected file size is almost certainly larger than the maximum path length.


Why would you ever do that? Why would you even want a function that accepts either a file name or file contents when both are strings? What's so hard about defining a new function?


One common case where this is attractive is where you have a bunch of "public" functions built on top of each other, with the higher ones passing that parameter on and not otherwise caring what it is:

  high_level_function(src, ...)
      ...
      mid_level_function(src, ...)
      ...

  mid_level_function(src, ...)
      low_level_function(src, ...)
      ...

  low_level_function(src, ...)
      ...

If you can make the low-level function generic one way or another, you avoid having to duplicate the higher-level functions.
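In Python, that layering might look like this (function names are hypothetical): only the lowest layer inspects `src`, so the pass-through layers need no duplication:

```python
import io

def low_level_count_lines(src):
    # Only this layer cares whether `src` is a path or an open file.
    if isinstance(src, str):
        with open(src) as f:
            return sum(1 for _ in f)
    return sum(1 for _ in src)

def mid_level(src):
    return low_level_count_lines(src)  # forwards src untouched

def high_level(src):
    return mid_level(src)              # forwards src untouched

# Works with an in-memory file object as well as a path string.
assert high_level(io.StringIO("a\nb\n")) == 2
```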

Another case is where there are two or more parameters that you want to be generic in this sense: if your only option is duplicating the function you can end up having to define inconveniently many variants.


Yeah, that is convenient and I do that often. But I try to wrap the variables in different types depending on their source (e.g. whether it's a filename or file content) so that functions further down can disambiguate between variables explicitly.
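A sketch of that wrapping idea (names are hypothetical):

```python
from typing import NamedTuple, Union

class Filename(NamedTuple):
    value: str

class FileContent(NamedTuple):
    value: str

def describe(src: Union[Filename, FileContent]) -> str:
    # Downstream code dispatches on the wrapper type rather than on
    # heuristics such as string length.
    if isinstance(src, Filename):
        return f"path: {src.value}"
    return f"{len(src.value)} bytes of content"
```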



Why not have clearer file, filename or data keyword arguments? Or just file, and call open() or StringIO if you have the file name or data? Plenty of more explicit solutions.


Lots of public APIs support this behaviour, it's convenient for users. For example numpy.fromfile will accept a file object or a string.


You're confusing static typing with strong typing. Python is strongly-typed, but not statically typed.

Python's optional typing could also be used to get rid of a lot of your assertions.


You are confusing optional typing with optional static typechecking. Python has the latter, not the former. In other words, everything is typed in Python and it will always check types at runtime.


Thanks for the writeup, I am going through a similar process. I just retired last Friday (I am an old guy) from managing a machine learning team where everything was done in Python.

For personal projects (and past consulting work) I have been loving Lisp languages since the 1980s but many languages have so many of the benefits of Lisp (thinking of Haskell and Julia) and there is definitely a ‘lisp tax’ on deploying and shipping products.

Anyway, for writing new books, writing open source projects, and perhaps some consulting, I would like to cut back to a single language, instead of the 4 or 5 I use now.

EDIT: I did some Go codelabs when I worked at Google and have played with Rust (principally spending some evenings with an open source blockchain project), and neither language will be my home base language.


If I retired today I'd try really hard to only use Elixir and OCaml.

Elixir is perfect for servers and fault-tolerance. Also scales quite well to several tens of machines before needing any special tooling.

For a huge percent of most projects today Elixir would be ideal.

OCaml I want to use more for its really concise and strict (but extensible) syntax. And mostly for its strong typing and really well-optimized compiled native code.

Congratulations on retirement, relax and tinker at your own pace. ^_^


Congrats on your retirement!

And enjoy your opportunity to work with whatever languages you like. The variety of tools available today is so much greater than it was in the 80s.


> Rust is developed by a community, and was started by Mozilla. Go development seems to be de facto controlled by Google, who originated the language. I'd rather bet my non-work future on a language that isn't controlled by a huge corporation, especially one of the main players in today's surveillance economy.

Anyone else agree with this view? Programming languages should be chosen based on technical merits, rather than who is behind them. Is there a lesson from history that I am missing?


"Programming languages should be chosen based on technical merits, rather than who is behind it."

Considering the motivations of the language owner is a valid concern. Many people, for example, jumped on to HHVM as a "faster PHP". Then it started branching away from PHP[1]. Predictable, if you considered Facebook's reasons for having it.

[1] https://hhvm.com/blog/2018/09/12/end-of-php-support-future-o...


What were Facebook’s reasons?


I don’t know anything about HHVM, but plausibly the goal of it is “help Facebook” rather than “be a faster PHP,” and people who came for a faster PHP and are not Facebook are let down by it.


Non-optional strong types is one example where it benefits Facebook, but likely isn't great for those hoping for a "faster PHP". The blog post makes it sound like HHVM will only support hack, and not PHP, sometime in the near future.


In all three cases, Python, Go and Rust, the language and ecosystem may be in the hands of an organization, but you can probably produce code productively in it for years even if the organization went under or chose unacceptable paths.

Go is quite unique in that it produces very independent binaries. This might make software written in it quite future-proof.

Personally I like Python's philosophy and the way the community is run. I like the trajectory. I understand the trade-offs. And I am sure as hell not going to teach Go or Rust as a first programming language to people who aren't computer science students. And even then...


>Go is quite unique in that it produces very independent binaries. This might make software written in it quite future-proof.

What do you mean by "it produces very independent binaries"? Do you mean that Go implements system calls itself, instead of using the corresponding wrappers from the OS's underlying C library?

I read something like that recently, but haven't looked into the point yet.


"Very independent" doesn't mean totally independent. In particular Go doesn't do shared libraries.


Cool, thanks.


I think a “rust with GC” could actually be a very good intro language. The borrow checker is more than beginners can handle though.


There is little point to introduce a GC in Rust. If you don't mind a GC, you have nicer languages out there.


From my perspective, of all the languages that I've tried (Java, C, C++, JavaScript, Haskell, OCaml, Python, several Lisps, PHP, Swift), Rust is the best even without the borrow checker, and the borrow checker is just a nice bonus. Admittedly, there are many languages that I haven't tried, and I haven't done significant work with all of those.


>Anyone else agree with this view? Programming languages should be chosen based on technical merits, rather than who is behind it.

Absolutely not. Technical merits are only part of the considerations.

Big companies, for example, where the stakes are higher, never choose languages on their technical merits alone. They also look at the ownership/stewardship of the language, and in many cases roll their own languages to avoid being at their mercy.


I'd say, all things being equal, yes, choose languages on tech merit or suitability to task. But look at what's happening with Java and Oracle's handling of it. I think the Google vs Oracle lawsuit is still going through appeal. Ask Google if they wish in hindsight they'd built Android on something else.


Not saying you’re wrong, but Google re-implemented Java for Android, which is quite different from using Java alone.

So it’s not a completely fair comparison.


Of course the reimplementation is what got them into this lawsuit...


I think it's more accurate to say that not paying Oracle is what got them in court. Either way it shows that the steward of a language can be important.


> But look at what's happening with Java and Oracle's handling of it.

It got fully open sourced?


I think it’s a very important aspect of the language’s ecosystem health, as the steward has a big impact on it.

This can be either a blessing or a curse. Google seems to be on the good side, but I've seen plenty of languages frustrated by their stewards.


The original author comes close to contradicting themselves - Python too is developed by a community, but one of the reasons for moving away is that they are "not getting the feeling that the Python community is going in a direction I want to follow". That can happen to communities just as well as it can to corporations.


There's a simple remedy to the problem of becoming dependent on a single entity wrt the programming language you choose: use only languages with "official" specs (by ISO/IEC or another respected org) and multiple implementations. That's C, C++, ECMAScript, shell, and a couple niche languages such as Ada, Fortran, and Prolog.


I believe Java also fits that list. The [Java Language Specification](https://docs.oracle.com/javase/specs/jls/se12/html/index.htm...) via the [Java Virtual Machine Specification](https://docs.oracle.com/javase/specs/jvms/se12/html/index.ht...) is implemented on the open-source OpenJDK project and many other free distributions exist, including from Amazon, Azul, IBM, Alibaba and others [1].

[1] https://en.wikipedia.org/wiki/List_of_Java_virtual_machines#...


Java is a bit special in this regard because of the Java TCK which has caused issues for e.g. Apache Harmony.


One should evaluate all relevant characteristics, both technical and non-technical.


Oracle and Java


The strengths of Go become apparent when working in teams and when deploying, not when reading blogs about the language before trying to write code in it.


I'm working in a team of 5 people, none of whom had used Go prior to 9 months ago or had any training in it, and we've just been unbelievably productive in it. We came to it from a combination of Python and JavaScript backgrounds and have no problems collaborating or jumping into each other's code.

To me the main advantage is that the small number of primitives and the simple syntax force people, in general, to be very explicit about what they're doing. The code almost doesn't need comments.


> To me the main advantage of it is that the small number of primitives and simple syntax forces people to in general be very explicit about what you’re doing. The code almost doesn’t need comments.

That has little to do with 'simple syntax', and more with the language itself (and/or its community) setting clear standards as to what "idiomatic" code in the language should look like. And Rust is fairly good at that, albeit with the inherent limitation of having to support the additional complexity of ownership/lifetime management. (Go can do without this since it uses GC instead. Swift just uses ARC everywhere.)


Coming from dynamic languages like Python and Javascript, it's not hard to beat those. Compared to something like Java or C#? Yeah, golang trails way behind. That's why you keep seeing such claims, because many people come from a background where they weren't exposed to a strong ecosystem, so any minor improvement seems like a big deal.


Exactly this.

Rust is quite the opposite of Go in that regard - just like C or C++, it makes it really easy to write over-abstracted and clever code that is hard to understand, audit or review.

Rust is also an extremely powerful tool and a ray of hope for systems programming, but comparing it to Go is an apples and oranges comparison.


My experience of working with Go in teams is that it encourages copy-paste programming and "works on my machine" deployment problems.


How does it “encourage” the latter?


gopath, poor standardization of project layout, build processes and dependency management.

I've seen people copy protobuf definitions from repo to repo because they couldn't get them building consistently from a different directory, never mind issues with different versions of the go protoc plugin.


GOPATH is no longer needed as of Go 1.11 and 1.12, with the introduction of modules.


I'm a long time C++ developer, and recently took a job where I write a lot of Python in a large code base. Initially it seemed great, but I quickly realized how uncomfortable it is to have no safety net up front. The interesting thing is Python eliminates a lot of programming errors you may encounter in C++. But it introduces a whole different set of programming errors you would never see in C++. I don't feel comfortable using Python for any project where the code must be correct. I would use Rust every time for that.


Neither Rust nor C++ eliminates logic bugs (and no language can do that).

If you are writing code that must work, you should either formally prove the system correct (if feasible) or have a huge test suite covering everything.


> Neither Rust nor C++ eliminate logic bugs (and no language can do that).

While true, it's a big benefit to eliminate classes of known bugs (e.g. memory safe languages eliminate undefined behaviors, and Rust eliminates data races). That allows the programmer to pay attention to other aspects of the code, since there's only so much energy one person can spend.


No. If a system MUST work, then there have to be no bugs at all. Removing a class is useless. You have to remove all bugs.

You are talking about systems that SHOULD work, but aren't required to always work. Typically, safety-critical software is the latter; the rest of software is the former. But the latter are entirely different beasts, and are what we were discussing.

By the way, Rust does not eliminate all race conditions, and memory-safe languages may still have undefined behavior unrelated to memory (e.g. Rust).


Not trying to dismiss this person's work in any way, but is there any reason why this blog post should be considered notable? They clearly say at the end that they've yet to write any real-world code in either Rust or Go, and so far have just been learning about either language - so how come that this shoots up to the top of HN? Is it just the magic keywords "Rust" and "Go" that do that?


I am a Rust fanboy, and I can't see anything worthwhile in this article. I joke that any love-story about Rust will get you to a front page of HN. Yeah, Rust is awesome, and many people love it, but it is a bit ridiculous.


I wonder a little about the source of the name "Rust". That seems like an odd choice for a language name - not that it matters. Rust - being associated with old metals - makes me think of something that's been left out and abandoned. Maybe because I'm from a rural part of the country where you see a lot of rusted metal sheds or abandoned structures. Go, on the other hand, makes perfect sense - "let's go!" implies that it is something that is made to get things done and also being the first two letters of Google.


It’s got multiple associations, the “well-worn tool” one is one of them. https://en.m.wikipedia.org/wiki/Rust_(fungus) is another.


At 15000 loc, if the author found it difficult to organize python code, then it's not python's fault.

I find it a pleasure to organize code in python.

What am I missing here?


Exactly. 15Kloc is just out of the 'this stuff is easy' zone and switching to Rust or Go will make that problem larger, not smaller (because you will need more code to achieve the same effect).


It’s not just about code organisation though. Both Rust and Go have fantastic compilers which catch a great many dumb but common user errors before you even get to the stage of testing.

Obviously this doesn’t replace the need for functional tests but writing those tests is as much prone to user error as writing the code itself. So having a compiler check for stupidity up front is always going to be a bonus.

Also, the static type system helps a lot. Having dynamic types can be awfully convenient at times, and I completely relate to why some people like them, but static types mean your structures and classes effectively have written specifications defined in the code itself, which the compiler verifies the rest of your program against. So if you're accidentally trying to write text to a numeric type, your compiler will catch that.

In my experience, the extra up-front overhead (in terms of development time) of statically typed AOT-compiled languages ends up saving you development time in the long run when working on larger projects.


It all depends on the type of project, timeline, etc. I find the stupid errors in untyped languages trivial to catch. When debugging, the hard errors are what take a long time to deal with, and those types of errors can rarely be caught by a compiler.

On the other hand, dynamic languages give you flexibility and speed when developing. In some cases that is worth a lot more than the losses you suffer from them.

Having said that, Rust is just a terrible application language. It was built for a purpose; use it for that purpose.

From a typed perspective, of what I have used recently, Swift is probably my favourite.

Here comes the catch: libraries and community support trump the hell out of almost any language feature.

There is a reason we write a metric ton of C++ for everything, and basically everything in Python has C++ and C under the hood. C++ has the stuff to do it all; it has been tested, polished, tested some more, documented, and it works. Now even PyTorch has a first-class C++ API. It's about time!

Java traumatized me as a young engineer fresh out of school. I can’t bring myself to even look at it these days. I am sure it is much better now. I just can’t do it.


>Having dynamic types can be awfully convenient at times and I completely relate to why some people like them but static types means your structures and classes effectively have written specifications defined in the code itself which the compiler verifies the rest of your program against. So if you’re accidentally trying to write text to a numeric type, your compiler will catch that.

In my experience, if I'm writing relatively decent tests and a sprinkling of sanity checks, these types of errors get caught quickly and are very obvious.

I can sympathize with somebody who has largely worked with untested statically typed projects and has moved to untested dynamically typed projects though - this is where the benefit of compiler type checking might seem to be most valuable.

I can also sympathize with somebody working with a dynamically typed language with a bad type system (cough javascript).

>In my experience the extra up front overhead (in terms of development time) of static typed AOT compiled languages end up saving you development time in the long run

Yeah, mine doesn't.


Like I said, tests are also subject to user error. I've seen scenarios where tests passed when they shouldn't have because the test itself was wrong. Or situations where test coverage was good but a basic type check seemed so obvious that adding it was forgotten. Some of those tests wouldn't even have been needed in Rust or Go.

I do agree that tests are important; hence why I raised that point from the outset. But ultimately in Python you’re still having to add extra tests just to catch up with the default behaviour of Go and Rust. Which is the point I was raising. Go and Rust are better at catching stupid user error.

Clearly though you can write large projects in Python if you’re disciplined enough. Same could be said for C, BBC BASIC or even assembly. The question isn’t whether you can, it’s whether other languages makes it easier or not.


>But ultimately in Python you’re still having to add extra tests just to catch up with the default behaviour of Go and Rust.

You don't though. The tests you'd write which would catch these "stupid user errors" are basic ones which you'd be remiss not to write in any language.

The fact that frequently developers do choose not to write any tests at all on large code bases does mean that they start to rely heavily on compiler catching 'stupid user errors' but it doesn't have to be that way.

>Go and Rust are better at catching stupid user error.

...if you don't have a basic level of tests, yes.

(although rust does provide certain type level protection against, say, data race bugs which python doesn't).


You're flip-flopping all over the place with your descriptions of tests. Broadly speaking, there are two types of tests being discussed here: positive tests and negative tests.

Positive being that the code does what you expect under normal operation. Those you'd obviously need in any language.

Negative tests are, amongst other things, checking that your functions behave correctly when you put garbage in. If your compiler flat out refuses to send a string type to a function that is designed to accept an integer then you’re already covering a whole class of bugs without needing to write specific tests for it.


>You’re flip flopping all over the place with your descriptions of tests.

The straw man of my argument living in your head flip flopped. I did no such thing.

When I said "basic tests" I meant "demonstrate the behavior under normal circumstances". If you're doing TDD on 60% of your code base you'll write enough of these.

>Negative tests are, amongst other things, checking that your functions behave correctly when you put garbage in.

This is a completely unsuitable use case for a test. If your function expects non-garbage, it should raise an exception via a sanity check when it gets garbage.

I wrote one just a few hours ago like so:

    assert no_nans(dataframe)
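The original check presumably ran against a pandas DataFrame; a pure-Python stand-in for the same kind of sanity check might look like:

```python
import math

def no_nans(rows):
    """Return True if no float NaN appears in a list-of-lists.
    A stand-in for a pandas-based check like `not df.isna().any().any()`."""
    return not any(
        isinstance(v, float) and math.isnan(v)
        for row in rows
        for v in row
    )

assert no_nans([[1.0, 2.0], [3.0, 4.0]])
assert not no_nans([[1.0, float("nan")]])
```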


I have no problems with exceptions per se, but what you've now done is push your error checking into the runtime, which means your application could crash at unexpected times. This might be fine for your typical use cases, but it is a terrible way to go about building software if you're writing headless services.

The point of functional tests is they catch developer errors before your codebase hits production. Which you cannot guarantee if your error checking is only in the runtime.

Disclaimer: I write services used by millions of people (literally) each day. Some of those services are written in Python.


[flagged]


> In theory but in theory any application could crash at an unexpected time

That’s a terribly unsatisfactory answer.

> In my experience on a half decent code base these sanity checks never trigger during production in normal code; they're designed to get triggered during tests. They're there to make it easy to track down the root cause of bugs.

There are a lot of conditionals attached to your statement. ;)

> The times when they did get triggered on production code I did it intentionally as a way to debug a train-wreck of an application in production.

If you hadn't relied on type checking in your runtime then you might not have needed to get to the stage where you were debugging something in production.

> I've been doing this for 10 years - on many projects that have had millions of users. You're not special.

I didn’t say I was special. I was just making a point that I have used Python on important services just in case I was coming off as a Python hater.

Btw, I've been doing this for 30 years, though most of that hasn't been in Python; most of the last 20 years has been high-availability systems. Which may explain why I'm less willing than you to push buggy code to production and rely on exceptions.


[flagged]


This comment crosses into personal attack, which is not allowed here. If you'd please review https://news.ycombinator.com/newsguidelines.html and follow the rules when posting to HN, we'd be grateful.


Weird how you've come to that conclusion when I've been the one talking about testing and you keep shrugging it off with "well I prefer exceptions" because "sometimes you have to debug things on production".

I honestly don't think you know the first thing about writing stable software so are now just lashing out at me.


This also crosses into personal attack, which is also not ok here. If you would also please review https://news.ycombinator.com/newsguidelines.html and follow the rules when posting to HN, we'd also be grateful.

Basically (and this goes for both of you as well as the rest of us), when you find yourself unable to keep from replying as irritation mounts, it's best to wrest oneself away from the internet for a bit.


I wasn't irritated. I just assumed that kind of tone was permitted because this place is full of people taking to each other like that.


Those are only a subset of the possible negative tests, and the ones you shouldn't be writing (if you do, simply switch to another language).


> Those are a only a subset of the possible negative tests

I agree. I said this too

> and the ones you shouldn't be writing (if you do, simply switch to another language).

That was my point


Mypy and type hints do catch that.


Sure. There is additional tooling you can use in Rust and Go, too, to catch a plethora of errors beyond what I described. However, if such tools require the user to run them manually then they're just as open to user error as the testing I also described. This is why having those checks in the compiler itself can be invaluable.


They're usually integrated into PyCharm or VS Code, so you get the warnings and errors as you type/save.


Running a static analyzer is just another case to add to the test suite. So... you add it to your build system and Bob's your uncle.


Again, the point is that you need your engineers to be diligent enough to do so. That isn't always the case. Which is why I talk about checks being part of the standard language build tools.

Humans err. I've seen numerous occasions when stuff that should have been added to the tests wasn't, even from engineers who are normally thorough. I've also seen cases where competent engineers have written test cases that pass when they should have failed. Heck, even I've done that, and I've been writing software for 30 years.

The more a language's build tools can catch developer errors, the less you need to rely on developers remembering to check for them, and the more time developers can focus on the project at hand.


This seems a more institutional issue than something python should take care of for you.

If your devs are pushing code into production that doesn't pass the tests (or they aren't even running the test suite) then you're asking for trouble no matter what language you use.

As much as I love reading the 20+ pages of errors C++ likes to spit out when you use a template wrong, if I want to do something simple I just reach for Python.


> If your devs are pushing code into production that doesn't pass the tests (or they aren't even running the test suite) then you're asking for trouble no matter what language you use.

Clearly that’s not even remotely what I said. But I’ve posted the same thing twice already. I see no point posting it a 3rd time.


My understanding of your argument is it's too much of a burden for devs to run a static type checker manually so the language should force you to run it as part of the compilation process.

Sure, yeah, if you want to ensure type safety then use a type safe language...argument over.

...unless you want to duck punch some function into someone else's library, and you really don't want to rearchitect how the whole thing works because of that one special case which just doesn't quite work.


Yeah I agree dynamic languages have their place too. I’m actually extremely agnostic about languages.

That said, I've been on projects debugging code that had functions overridden like that, and it wasn't fun. Sometimes people get too clever for their own good. But you can have that problem regardless of the language.


Statically typed languages are more robust, as far as reducing errors is concerned. However, with modern static type checking tooling in Python, for small to medium sized programs, Python's benefits far outweigh its shortcomings.


I’ve already addressed that point elsewhere in this thread


Yes. At 150kloc a novice Rust/Go programmer will surely have less trouble maintaining his code base!


Maybe that's your personal opinion, not an objective fact?


I have a genuine question. One of Rust's biggest plus points is having no GC and none of the related overhead at runtime. Go, on the other hand, does use a GC. Then how come so many successful systems programs (e.g. Kubernetes) are written in Go? Anything I am missing here?


> how come so many successful Systems programs (e.g. Kubernetes) are written in Go

This is because what you call "systems" is not the "systems" that the lack of GC would benefit from. For all intents and purposes Kubernetes could be written in Python or any other general purpose programming language (and for example OpenStack is in fact written in Python).

On the other hand, systems like "writing an operating system kernel" or "running on a microcontroller" are things where GC might indeed be problematic and/or impractical.


Rust is still in a research phase, as many programming patterns have no clear counterpart in Rust. Even though Rust's approach is very interesting, it might very well be that its strict type system turns out to be a dead end for some existing programming patterns.

Also, writing self-referential data-structures is very difficult in Rust (but not impossible) due to the type system. I suspect that a lot of developers who just want to get work done get turned off by fighting with the typechecker, and choose GoLang instead.

And also: GoLang is a very accessible language that can be mastered in a couple of days.


> Also, writing self-referential data-structures is very difficult in Rust (but not impossible) due to the type system.

Not really true (at least, not nearly to the same extent) since Pin<> was stabilized. I suppose that only serves to prove your point about Rust still being "in research phase", though!


Good to know, but I guess that Pin<> does not solve the problem of indirect self reference, such as when you have a tree where every node points back to its parent (e.g. for convenience or performance reasons), or a doubly linked list where each link has a forward and backward pointer (?)
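For what it's worth, parent pointers don't actually need Pin<>; the usual workaround is Rc for the strong child links and Weak for the back-pointer to the parent, which breaks the reference cycle. A minimal sketch (node layout and field names are mine, not from any particular library):

```rust
use std::cell::RefCell;
use std::rc::{Rc, Weak};

// A tree node: children hold strong (Rc) references,
// while the parent link is a Weak reference so the
// cycle parent <-> child doesn't leak memory.
struct Node {
    value: i32,
    parent: RefCell<Weak<Node>>,
    children: RefCell<Vec<Rc<Node>>>,
}

fn main() {
    let root = Rc::new(Node {
        value: 1,
        parent: RefCell::new(Weak::new()),
        children: RefCell::new(vec![]),
    });
    let child = Rc::new(Node {
        value: 2,
        parent: RefCell::new(Rc::downgrade(&root)),
        children: RefCell::new(vec![]),
    });
    root.children.borrow_mut().push(Rc::clone(&child));

    // Navigate back up via the weak parent pointer.
    let parent_value = child.parent.borrow().upgrade().map(|p| p.value);
    assert_eq!(parent_value, Some(1));
}
```

It works, but the Rc/RefCell/Weak boilerplate compared to a plain pointer in C or a reference in a GC'd language is exactly the kind of friction the parent comment is describing.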


Then does investment in learning Rust have any merit?


Yes. If your style of problem solving suits language features like pattern matching on sum types and type classes, then Rust would probably be a more productive language for you than C++, and it would be beneficial to learn it.
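To make that concrete, here's a small sketch of what "pattern matching on sum types" buys you: an enum modeling an expression tree, with an exhaustive match the compiler checks for you (the Expr type and eval function are illustrative, not from any real codebase):

```rust
// A sum type (enum) modeling a tiny expression language.
enum Expr {
    Num(i64),
    Add(Box<Expr>, Box<Expr>),
    Mul(Box<Expr>, Box<Expr>),
}

// Exhaustive pattern match: forgetting a variant is a compile error.
fn eval(e: &Expr) -> i64 {
    match e {
        Expr::Num(n) => *n,
        Expr::Add(a, b) => eval(a) + eval(b),
        Expr::Mul(a, b) => eval(a) * eval(b),
    }
}

fn main() {
    // (2 + 3) * 4
    let e = Expr::Mul(
        Box::new(Expr::Add(Box::new(Expr::Num(2)), Box::new(Expr::Num(3)))),
        Box::new(Expr::Num(4)),
    );
    assert_eq!(eval(&e), 20);
}
```

In C++ you'd typically reach for std::variant plus std::visit or a class hierarchy with virtual dispatch to express the same thing, with weaker exhaustiveness guarantees.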


Many reasons to choose Go over Rust. First and foremost, Rust was not 1.0 and was still making breaking changes when Kubernetes came around.

Other things, like the situation with async/coroutines, are still not great in Rust.

Go is, syntactically, a much simpler language than Rust.

It all depends on what you need, what you want, and where the community around it is (e.g. Docker came first and was written in Go).


For one, because in many systems situations, having the garbage collector slowdown is okay. Also Kubernetes is probably a bad example since it’s a Google project and Google is of course going to use Go more than most companies would.


> Google is of course going to use Go more than most companies would.

You'd be surprised.


I’d expect at least higher usage in products that aren’t a business focus. Kubernetes is a good example of that since Google doesn’t actually use it.


> (e.g. Kubernetes)

Kubernetes may as well have been written in Python. It's mainly IO bound, so the benefits of compilation as opposed to running on an interpreted VM (like Python) aren't that big.


You are missing the obvious. Go has many successful projects because some people chose to write software in Go, as opposed to just having opinions and debates on PL theory, GCs, and runtimes.

Now I am not arguing one way or another. People can just have opinions or they can just write software or they can do both.


GC is one of those features that divide languages. It isn't a big limitation but it usually precludes hard real time performance. Rust caught the public's attention more recently than go. Both are still in their early days as far as adoption is concerned.

