Python 3.12.0 is to remove long-deprecated items (python.org)
264 points by BerislavLopac on Nov 16, 2022 | hide | past | favorite | 252 comments



I'm normally joyful when a modern language (say, Julia) decides to break backward compatibility to improve the language on a fundamental level. This is mostly because I'm not too invested in it.

For languages where I have 10+ years of work behind me, it's the exact opposite, and there I see the C/C++ model of not breaking backward compatibility as a much saner choice.

Python in particular is an extremely bittersweet pill to swallow. The amount of small and large breakage I've had ever since I started to use it for production work (v2.6 and onward) has been relentless. Small and large breakages requiring constant retooling and reworking. Minor release? Yeah, still breaks just as much as a major one. pyenv doesn't help when your dependencies need to be updated, and the updates do not support the lower versions you wanted to use anyway. Containerizing everything is not a solution for a project that is expected to be supported for years, so the real way forward is to fix it, again, and again, and again. My experience is that every 6 months there _will_ be work just to fix bitrot. A one year old python project will hardly even run unless it's using the stdlib only.

By comparison, I never experienced such churn with perl.


For a different perspective, I started using python about 5 years ago, and I have never experienced a single breakage due to a new python release. Instead, I'm always excited to read the release notes for new versions, and it's the only thing that might make me move away from debian stable someday. But waiting a few months, using containers, or compiling a newer python is usually fine when I can't wait.

Removing deprecated stuff is… the point of deprecating stuff?


> Removing deprecated stuff is… the point of deprecating stuff?

The point of deprecating stuff is to redirect to newer/better/saner APIs, not necessarily to remove the thing.

This is where I like Java's approach. Stuff is deprecated in the JDK but fairly rarely is it removed. When it is, it's because the feature is either unused or so detrimental to the ecosystem as to warrant removal (see: finalizers).


My experience with Python is probably closer to nicoco's than wakeupcall's (usually pretty painless), but I tend to prefer the Java approach. Waiting three releases for removal gives you four years of support and seven years of security fixes. That isn't unreasonable but feels a little too fast in a language as old as Python (especially with people still burned from Python 3).


Same here. I don't think I've been affected by a single deprecation since upgrading to Python3 in 2015, and even that upgrade wasn't that difficult -- mainly I was just forced to fix a few things I had been doing incorrectly, which imho is a good thing.


You should also look into pyenv if you'd like to install newer versions sooner, outside a container.


I moved to nixpkgs from pyenv about a year ago, with positive results. I think it's worth the initial effort.


5 years is not enough to be experiencing this. The pain is very real and consumes a tremendous amount of effort. I've started writing tools just to create inventories of which packages with which versions I'm using where in order to try to automate the upgrading of virtual environments as much as possible. It's a horrible experience.

The biggest boulder I currently see rolling towards me is the MongoDB driver since I need to upgrade the databases and the drivers can't cope with the latest version.


We have recognised this problem and to that end we have produced a Stable API standard https://www.mongodb.com/docs/manual/reference/stable-api/
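In PyMongo, opting in looks roughly like this (the connection string is a placeholder):

    from pymongo import MongoClient
    from pymongo.server_api import ServerApi

    # Pin the client to Stable API v1 so covered commands keep their
    # documented behaviour across server upgrades.
    client = MongoClient("mongodb://localhost:27017", server_api=ServerApi("1"))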

I realise this doesn't help you with historical API changes.

If you look for help in the MongoDB community and tag me (Joe.Drumgoole@mongodb.com) I will make sure you get help migrating your app.

https://www.mongodb.com/community/forums/


conda create -n yourenv python=3.12 ;)


Other than the 2 to 3 transition, this doesn't jibe with my experience as a developer who's used Python in various projects for over 20 years (since the 1.5.2 days).

I've dealt with a lot of sideways yak-shaving work in my career, and still do frequently today, and Python has been the absolute least of it.


Can you detail any of these "small and large breakages" or "bit rot"? It just doesn't jibe with my experience with any large code bases in nearly any language, python included.


Python 3.10 introduced asyncio changes that broke compatibility with earlier 3.x releases. I don't have the details at hand, but as I recall, some code that supported caller-supplied event loops not only broke, but could no longer be made compatible with both old 3.x and new 3.x python versions (unless separate code paths were maintained).

This was frustrating, because I don't expect point-releases to break the standard library, let alone require a code fork to maintain cross-distro compatibility. It was disappointing, because I had never encountered such breakage from python while Guido was still in charge.
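If it's the change I'm thinking of, it was the removal of the `loop` parameter from asyncio's high-level API (deprecated in 3.8, removed in 3.10); a rough sketch:

    import asyncio

    loop = asyncio.new_event_loop()

    # Accepted on 3.8/3.9 (with a DeprecationWarning), but a TypeError on
    # 3.10+, where the loop parameter was removed:
    lock = asyncio.Lock(loop=loop)

    # The 3.10+ spelling binds to the running loop on first use instead,
    # so code built around caller-supplied loops needed restructuring:
    lock = asyncio.Lock()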


I had the same experience. I was pretty annoyed to see that some server apps were unable to run after the upgrade because of some changes to the loop. "Why?", I was asking myself.


> By comparison, I never experienced such churn with perl.

I literally have 20+ year old perl scripts (many of them, each usually doing one single thing) still working on new machines without issues.

I've rewritten some stuff from python(2) to perl (instead of python3) because I was unsure when a new rewrite would be needed, for whatever reasons...

Now, more python stuff needs fixing, while perl still works.


You're kinda comparing a simple basic API in perl vs likely more complicated python code here, aren't you?

If you had a set of single simple python functions, maybe they wouldn't have broken as much, if at all?


> If you had a set of single simple python functions maybe they wouldn't have broke as much if at all?

Like (python 2):

    print "hello world!"
?

( https://docs.python.org/2/tutorial/introduction.html )


Python 3 came out 14 years ago, and changing the syntax of one function wasn’t even a big deal then. Literally a 10 minute code search to fix a project.

It might be time to move on…


It was literally months to fix every change in every project and to fix all the issues coming from that. ("print" wasn't the only change in 2->3)

Our codebase was (and is) old, but the forced change came recently, when ubuntu decided to remove python 2 support (ubuntu 20.04?). And when ubuntu decides they want python >=3.12 as the only version installed, that means another change and more work to fix stuff that worked just fine.


Somewhere Larry Wall is laughing (and/or crying.)


This is the top reason I chose Perl for my project.

Literally every time I try to use someone else's Python code from GitHub I run into this crap.

I wanted my project to be easy to install and run, so I chose Perl.


I have Perl scripts I hadn't used for 15 years; I ran them expecting to need to fix something, but nothing needed fixing ...


The really nice part of sticking with python2 is that it is now a stable interface, just like perl 5.

only half joking.


I would consider it, except it is no longer being installed most places, since it is "deprecated".


It's a real shame, since this is a project management issue. You'd expect python to be a stable language given its age. I expected that to be the case at 2.7. Then at 3.1. At 3.12 it's still not the case.

It's reasonable to expect this is not going to change.


It's gotten far, FAR worse since Guido stopped having as much guidance. The increase in mandatory updates at regular intervals is a recipe for disaster in most cases, as it is in python.

Also, to pile on top of the stdlib problems - it's probably clear to most everyone at this point - but NVIDIA and Google (tensorflow) are really some of the biggest reasons python sucks in this way. They're the ones that cause most of the breakage. Starting with Nvidia making breaking changes, which then propagates to tensorflow et al, and then to enormous number of packages that depend on these fundamental packages.

So to summarize, it's mostly Nvidia's fault.


I think with backwards compatibility it's "fool me once".

In order to make the most of Lindy Effect, I avoid any dependencies with less than 20 years of backwards compatibility.

For some that means a subset of features. For some like Python it means any python scripts must be optional and have Perl duplicates.


This doesn't make sense. You can pin versions and it will work forever. If you want to update you need to update your code.
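e.g. with pip-style exact pins (the package names and versions here are arbitrary examples):

    # requirements.txt -- exact pins freeze the whole dependency set
    requests==2.28.1
    numpy==1.23.4

installed with `pip install -r requirements.txt`.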


A problem with pinning is that it doesn't only freeze features, ensuring that the features you need will be available forever; it also freezes bugs, in particular security bugs, ensuring that the bugs you have today, including ones you and possibly the entire world are unaware of, will stay around forever.


Not forever; e.g. very old versions of Python cannot install dependencies from PyPI anymore because SSL is stricter nowadays.


I mean that level of breakage is good. You can’t keep running outdated stuff while expecting to interact with the wider world.


Okay, then practically you can't pin versions and have it work forever.


PyPI is not required to run Python though. You could serve or source those packages elsewhere. Pinning the dependencies would at least resolve compatibility of the actual code.


You can. Package everything you need with the environment and you’re done. Now whether future versions of whatever container you choose to run it in will support establishing the environment is a separate detail. There is no answer. You can go all the way to reserving an entire computer to run mission critical code but you don’t know if it is future proof in terms of electrical specifications.

That’s the point of civilization though. You iterate and what works out, other people copy and iterate even more.


> You can go all the way to reserving an entire computer to run mission critical code but you don’t know if it is future proof in terms of electrical specifications.

See, that's a beautiful example, just not in the direction that I think you intend it. I can, in fact, take an appliance that's a few decades old and plug it into a modern outlet and it'll work just fine. I do not, in fact, have to preserve an entire electrical grid just to run my application^w appliance, because the underlying infrastructure maintains compatibility.


No, it is an example in exactly the direction intended; you took it the incorrect way. Just because electricity has been relatively constant in specification doesn't mean it'll always be the case. You can work with that assumption up to even a third or fourth order approximation. But on some level it'll fail.

Which brings you to the larger generalization that you haven’t grasped: the further down the stack you go, the more stable it is. Python is nearest the top layer. Hardware changes in decade cycles. Electricity likely in century cycles. Keep going. The laws of physics change never. Obv the time scales aren’t strictly logarithmic. Human ingenuity means we could find some exceptional electricity specification tomorrow and convert the whole world in one decade. But it is relatively rarer.


Yah, nothing the GP says makes any sense. I've been working with large code bases in many languages for many years. None are perfect, but the GP makes no sense; it sounds like poor decisions or poor code rather than a poor language.


The idea that pinning versions and using environments will make your code run stable forever is very wrong (in python). You can't even get things to run for a few years this way. Many of the packages are simply not available anymore. Ever tried to get an environment running that uses qt4 on py2.7? It really wasn't that long ago that py2.7 was standard (in the stable code realm that we're talking about).


If you want a long term snapshot of other people's packages, the onus is on you to store them indefinitely. Packages can be pinned and installed from a local folder.


> By comparison, I never experienced such churn with perl.

Ah, but that's because the Perl community rejected the one major attempt at such a change (Perl 6) so hard that it became its own separate language.


The only language I've experienced so much churn with is Swift (which loves to break things, vs. the relative stability of Objective-C.) But Apple has already outsourced yearly iOS compatibility updates onto every developer, and now they have even decided to remove apps that haven't been updated recently from the app store. A stable game/app development platform it is not.

I am still shocked though by the Python project deciding to break billions of lines of legacy code. What other platforms demonstrate that kind of contempt for their developers?

Removing the print statement seems like a particularly negative trade-off: breaking code in the name of syntactic purity, instead of simply deprecating the print statement and letting it live on compatibly with the print() function.

Since then a bunch of new syntax (not all of which is bad) has of course been added.

It's also weird that until now they seem to have largely ignored performance and focused on adding random new features.


I miss print without parentheses.


Me three, oops, meant two (extra parentheses to type). Not /s, but ;)


I’ve had that problem with Perl, too, so I think it’s more about which set of third-party packages you depend on and the norms in that space. One interesting tradeoff is that Python’s standard library is pretty large so you have a fair number of programs which can be frictionless by sticking to the stdlib.


Agree.

This is perfectly exemplified by Python 3's insistence that you call print with parenthesis:

    >>> print "hello"
    SyntaxError: Missing parentheses in call to 'print'. Did you mean print(...)?

I empathize with the lang devs in that having two forms of print is nonoptimal, but the fact it tells you to do something different while fully understanding what you said (as it were) is what really irritates me. It comes off feeling needlessly pedantic for the language.


I don't believe that the problem was that "having two forms of print is nonoptimal"; I believe the point was instead to remove a parsing ambiguity, which in turn allowed a whole class of errors to be caught at load time that previously weren't; and allowed the syntax to be extended in other new ways that previously couldn't have been encoded as parse rules due to the ambiguity.

While the Python3 lexer is specifically hacked up to recognize 'print' as a distinct lexeme — and to thereby emit an additional parser meta-instruction lexeme that triggers a special error-handling path in the parser if you then go on to make a syntax error per the newer uniform syntax — it doesn't actually know what you were trying to print. (That'd require a successful parse!) If the Python3 parser tried to do the compatible thing, that'd require ditching the uniform syntax altogether, and going back to the ambiguous parser.

It's a bit like Error Correcting Codes — the uniform Python3 parser knows enough to know that you did something wrong, and is provided by the lexer enough context to guess what kind of failure it was; but it doesn't have enough information to "Do What I Mean", because that's a strictly-greater amount of information.


I disagree strongly with this take and it seems basically to be saying "better error messages are bad."

What if clang says "did you forget a ';'"? It would have been better to just compile the code as if there was a semicolon?


No need to disagree, that's just two different design patterns among programming languages. If stuff like that matters, you can always choose Ruby or something similar. It has its own shortcomings though and neither approach is ideal in every situation. That being said, the transition from Python 2 to 3 was the most horrible thing that ever happened to a popular language. Print for example was fine as a keyword and forcing it into a function after so many years was a terrible retroactive design choice.


Meh, the print change was fairly trivial. You could backport the print function in Python 2 codebases, and updating existing print statements to use the function was easy to automate. Strings changing from bytes to Unicode was a lot more painful, in my experience.
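For reference, that backport is a one-line __future__ import per module:

    # at the top of each Python 2 module, before any other statements:
    from __future__ import print_function

    import sys
    print("hello", file=sys.stderr)   # function semantics on both 2.7 and 3.x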


That was just the surface. Even the Python Foundation knew that their 2to3 converter tool just wasn't sufficient and had to delay the discontinuation of Python 2 by half a decade, because there was so much unportable code out there. In the end we had 12 years where the two languages lived side by side, and my first job using Python had us use 2.7 for new projects due to library concerns, despite 3 having been around for more than 7 years already.


Unfortunately, it also had the effect of breaking language tutorials going all the way up to "hello world". A tough experience for new learners, who are least able to debug those simple errors.

Back when Python 3 was first released, the error message was not so clear, either...


This is why I appreciate languages with a strict and non-strict mode. Let me make that choice. In particular, if the parser is intelligent enough to understand what I meant and is just throwing a syntax error to be pedantic, then let me control that. Needlessly taking choices away will always be a frustration.


The downsides of having both strict and non-strict mode:

- It is a maintenance burden on the compiler/interpreter writers to allow both modes

- There may be unexpected behavior when you use code from two different sources; one that expects strict mode and the other expects non-strict mode

- There may be interpersonal conflict when developers working together prefer one mode over another


Orthodoxy has never been my strong suit.

I have a pragmatic take on this, if only because of Python's ease of throwing a script together and its non-technical userbase. Were it something that affects correctness, I'd more fully support this break of backwards compat.


> I disagree strongly with this take and it seems basically to be saying "better error messages are bad."

I'd rather have something like this:

> Use exit() or Ctrl-D (i.e. EOF) to exit

> >>>

Less pedantic, less passive-aggressive. Doesn't fake being nice to the user.


Here’s a good overview of the details between the two: https://snarky.ca/why-print-became-a-function-in-python-3/

“Why don’t we just keep both?” probably has many answers. The overwhelming one for me is that if everyone’s doing the same thing two ways, then everyone has to learn all the esoterics about the print statement.

“Why don’t you just do it for me?” is a cardinal sin for a runtime. And already exists in migration tools.


Being able to use it in higher-order functions is somewhat un-Pythonic, but I can accept that as a reasonable use for it. I would still prefer that the print statement were deprecated but usable.


I think composition falls increasingly within pythonic idioms... it is well suited to python's usage as a high-level controller around low-level libraries in other languages.

see also: the recent addition of match semantics in python


The problem is that the older syntax had way more ad-hoc oddities than just the lack of parentheses. Having to support weirdo statements like `print >> sys.stderr, "hello", "world"` complicates the parser a lot and creates ambiguities, especially if you try to have it both be this special-case statement and be a function object.

Is `print >> obj` a print call with redirection to the file-like `obj`? Or is it a operator call between those objects? What does `f = print` do, assign the function object or the result of calling print with no arguments?
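For concreteness (both snippets assume `import sys`):

    # Python 2: each form has its own ad-hoc grammar
    print >> sys.stderr, "error!"   # chevron form: redirect to a file-like object
    print "a", "b",                 # trailing comma: suppress the newline

    # Python 3: both become ordinary keyword arguments
    print("error!", file=sys.stderr)
    print("a", "b", end="")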


If you type `func "arg"`, you get a generic syntax error.

If you type `print "arg"`, the syntax error is a bit more informative, but I wouldn't say that the compiler "understands" what you mean. It's just making a guess based on the fact that the previous token was "print".
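Side by side, eliding the caret context (wording as of roughly 3.8/3.9; newer versions phrase the generic error differently):

    >>> func "arg"
    SyntaxError: invalid syntax
    >>> print "arg"
    SyntaxError: Missing parentheses in call to 'print'. Did you mean print(...)?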

On the other hand, supporting a special call syntax just for print, which includes more than just parentheses, would be substantially more complex. And then Python code in the wild would be more inconsistent and there will be endless debates about `print <args>` versus `print(<args>)`.


While it's been coming for a long time, it's still frustrating to see the cgi module go. It's going to create a load of busy work to rewrite some low-traffic scripts.

Modules being removed [edit: in 3.13]: https://docs.python.org/dev/whatsnew/3.12.html#pending-remov...

Updating CGI scripts will be a bit fiddly: https://peps.python.org/pep-0594/#cgi


There is CGIHandler, which ships with python. There aren't many simple examples, so...

  #!/usr/bin/env python
  from wsgiref.handlers import CGIHandler

  def app(environ, start_response):
    start_response('200 OK', [('Content-Type', 'text/html')])
    return [
      b"<html><head><title>foo</title></head><body>bar</body></html>"
    ]

  if __name__ == '__main__':
    CGIHandler().run(app)


Thank you!


You linked to Python 3.13 removals.

Python 3.12 removals are here:

https://docs.python.org/dev/whatsnew/3.12.html#removed


Thanks. I did ctrl-f cgi as I knew its removal was coming and missed the heading. So one more point release of grace, but it's still coming soon.


Just copy out cgi.py and use it as a module.

https://github.com/python/cpython/blob/3.11/Lib/cgi.py


It's even vendored!


I see telnetlib is on the way out in 3.13 or so, which means a lot of pointless busywork in my future. It's a super useful module for a lot of tasks.

The suggested replacements are either asyncio-based (which means whole-ass rewrites, as asyncio is really fucking opinionated), or excessively restrictive in some stupid way.

"Infosec" at work tend to raise a ruckus when their scans detect "older" versions of Python on machines too, so virtualenv'ing to pin versions or similar is often not the most practical in prod without getting an exception.


I would expect them to move telnetlib to a separate, unsupported repo that you can then import.

Even if they don’t, from a cursory view, it seems it’s a pure python library (https://github.com/python/cpython/blob/main/Lib/telnetlib.py).

If so, can’t you copy the current code into your project? You would take on the burden of supporting it, but looking at https://github.com/python/cpython/commits/main/Lib/telnetlib..., that doesn’t seem to be much of a burden.


PEP-0594 lists the deprecated packages and any potential replacements.

https://peps.python.org/pep-0594/#telnetlib


You should be able to repackage it as a module.


That's what I'm thinking, someone could always throw it up on pypi


What do you use telnet for today? Almost everyone uses ssh. The reason behind that switch was to provide encryption. Is there a reason you can't use ssh?


Telnet should not be used anymore. It's an old protocol, and not suitable for today, especially in the case of security. You can easily migrate to ssh.


I have an X-ray flat panel detector worth $35k (still >$5k ebay value today); it runs embedded linux on an obsolete samsung ARM processor and uses telnet to talk to the console computer through a point-to-point ethernet link.

Apparently, it's impossible to migrate to ssh, and why the ** do I need to care about security here?


I didn't talk about old devices. Security can be very important when connecting to other servers.


> Didn't talk about old devices.

Does not matter; you stated without nuance that telnet should not be used. So you told GP that they should trash their telnet-only devices.


No. You understood the phrase incorrectly and assumed I meant "bang your head against the wall, and don't you telnet". Of course if you have no other choice, go for it. But if you do...


"Telnet... [is] an old protocol..."

Ok. Telnet is a protocol, it is not synonymous with TCP.

As for unencrypted traffic generally, I see no reason for fetishizing encryption. "Compute" is not synonymous with teh cloudz. Cloud tech is not synonymous with teh cloudz. It's trivial to put e.g. Nginx in front of something.

If you need encryption between 127.0.0.1 and 127.0.0.42 you clearly don't trust the hardware you're running on; I'm sure someone's got some clever way of hiding the private keys, but "clever" doesn't mean provably secure. Between containers... meh. Show me you thought about it and have a threat model.

Between datacenters... not so much. Do I care whether the connection is encrypted, or whether all traffic goes through a tunnel? I'd like to see a threat model. Is sigint included?

Even across the globe I might make an exception for a well-reasoned argument. That would almost certainly have to be data which was an observable: the time, a flow rate. Encryption is not the same as tamper-proof. Even unencrypted data might need defense against that. (Maybe secrecy is easier than nonrepudiability. Maybe.)

Secrecy is not identical to privacy. Security is not identical to either of those, and hypervigilance isn't the same as not needing to be concerned with something (these days security can be either of those). Observability might be more important than either secrecy or privacy depending on the application, and who is doing the observing.

A lot of it depends on where you demarc your rings of hell and how you defend things.

I send data in the clear sometimes and I'm ok with it. I also use a telnet client for decidedly "not telnet protocol" purposes related to diagnostics as well as observability.


I don't see why a threat model is needed here; encryption is always important, especially in big corps. It seems like you're stating that a normal person doesn't need security.


Not sure why everyone thinks that telnet is "not suitable for today especially in the case of security", although telnet over TLS exists. There are also certain things that telnet can do which SSH cannot (e.g. transport data for a block terminal).

And everyone who uses "telnet as a program for connecting to a server on any port and talking the protocol manually" forgot that telnet is a real protocol and this type of abuse only works if the telnet client is sufficiently dumb / doesn't negotiate parameters. There is a reason why netcat exists.


Telnet does not have any standard security, at least by default. No encryption and no authentication among others.


This is factually wrong. There are even multiple different authentication and encryption schemes defined for telnet, see e.g.: https://www.iana.org/assignments/telnet-options/telnet-optio...

Just because some old and rudimentary telnetd or telnet client doesn't implement it doesn't mean it doesn't exist / isn't used.


Those are data formats, and a client is needed to support those (on both endpoints). You better just use something modern like ssh instead of hunting for clients (and making sure they're reliable).


I mean, obviously we use SSH where possible.

But in many cases out in the real world we have to support frankly awful old equipment that only offers telnet (if you are lucky).

I don't think we will see the back of such things before 2030 either.


Telnet as a way of logging in to a remote system is bad, sure, but telnet as a program for connecting to a server on any port and talking the protocol manually was great, especially before http got so complicated.


I agree telnet can be used as a "client" for a different protocol, but for other purposes it's not a good solution.


Part of security is acknowledging that business needs can't always fit in rigid ideals and coming up with compensating controls.


Yes, but that doesn’t mean you just leave telnet running decades after it became obsolete. That should be a time limited waiver and mitigations, and if you have legacy devices which absolutely need Telnet you should be planning for what you’ll do when something that old finally breaks and you have the resources to port relatively simple code.


This is circular: sure, if telnet is _obsolete_, then remove it. But being obsolete exactly means no one is using it anymore. If someone is using it, then it's not obsolete.

Regarding security, some would advocate that telnet, or whatever else, is secure at least as much as the network underlying it. So anyone who puts their "legacy" telnet apps on a VPC is fine, and has decades more to enjoy software that has already been running for decades.


Telnet has been obsolete since the turn of the century. That doesn’t mean that nobody uses it but it does mean that everyone who does should be upgrading away from it.

Trusting the network for security was common in the previous century but standards have improved since then. For example, sending your password in clear text is no longer considered acceptable by mainstream security standards since it avoids the risk of passive network monitoring or accidental exposure.


Obsolete means no longer in use or useful. It's been argued in this thread that it's both in use and useful. But yes, there are more secure protocols that overlap with most of what telnet can be used for.


Obsolete doesn’t mean something has no possibility of being useful but rather that it’s no longer commonly used because there are better alternatives. My son’s beloved steam locomotives still run but nobody uses them for normal commercial service because they became obsolete shortly after the invention of diesel and electric.

> (of words, equipment, etc.) No longer in use; gone into disuse; disused or neglected (often in favour of something newer)

https://en.wiktionary.org/wiki/obsolete

Bringing that full circle, telnet used to be common but it has security issues (lack of encryption or integrity unless you tunnel over TLS, lack of modern authentication options, etc.) and so anywhere it’s still used we should be looking in to replacing it.


As discussed in the thread above, there are scenarios in which lack of encryption etc is simply not relevant. And when that is the case, why would you prefer a more complicated protocol with more moving things that can go wrong?


Here's an example for you: why do I need to encrypt the telnet traffic to the LXI multimeters on my bench?

It just doesn't make sense to encrypt everything.


Telnet is still in use for its primary purpose, which is a bit different from a locomotive that has been relegated to a tourist attraction. Your last paragraph means you would like to make telnet obsolete, but it isn't yet. As evidenced in the thread, it's not practical or desirable to replace it with other technology in all of its deployments. So, it's still hanging in there for now.


I haven't used a telnet server in forever, but I do use the telnet client to connect and introspect various services from time to time.


I used to do that too but switched to netcat/OpenSSL s_client in the 2010s, especially as TLS everywhere caught on.


One does not use telnet as telnet when doing security work. Telnet + s_client (you can also use tons of other tools) lets you inspect many servers that have TLS security. SSH does not.


There are hacks, sure, but it's not telnet. It becomes a "fork" of telnet which gives you more functionality. That's not telnet.


Kudos to the python core devs. The core library is getting cleaner and more maintainable and the language is getting more performant and powerful for users. Long live the snake


Yes, especially if one reads their update on the macros, one knows maintainability is on their mind.


This is a reminder that the excellent pyenv [0] project can help you manage all your Python versions.

- Set global and per-project python versions

- Not written in Python.

- Shims your PATH

- Linux, Mac, Windows

[0]: https://github.com/pyenv/pyenv


As of late I've settled on using asdf[0] rather than nvm, pyenv, etc.

[0] https://asdf-vm.com/guide/getting-started.html


Agreed, asdf is the last stop for me, and it uses pyenv internally anyway for python.


A hidden gem of pyenv is its 'python-build' plugin, which just lets you build and install any Python version in the least number of steps:

  git clone https://github.com/pyenv/pyenv.git
  cd pyenv/plugins/python-build/bin
  ./python-build --definitions    # list all buildable versions
  ./python-build 3.10.8 /opt/python/3.10.8    # build and install to a prefix
  PYTHON_CONFIGURE_OPTS="--enable-shared" ./python-build 3.10.8 /opt/python/3.10.8    # same, with a shared libpython


I’ve regularly used ‘python-build’ in docker container construction to build a minimal (or customised) self-contained Python binary, libs, tool chain, etc., in an isolated path, to make copying from the build container to the final artefact container easy. It’s just all-around excellent tooling.


> Pyenv does not officially support Windows and does not work in Windows outside the Windows Subsystem for Linux.

I don't only use Windows, but I do use it, so having to use different tooling makes pyenv a hard pill to swallow.


There is a windows port that works great! https://pyenv-win.github.io/pyenv-win/



`python3.10 -m venv` or `python3.11 -m venv`, the default approach, works fine, and I don't need to pip install anything to get a virtualenv. What's the selling point of pyenv?


In your example, how did you install python 3.10 and 3.11? That's part of what pyenv solves for you.


Don't most linux distros have separate packages for the most recent python versions, so that they can be installed in parallel?


For most major distros, the oldest version of Python that they ship is usually the one that's used by other packages that depend on Python. It's not uncommon - especially for libraries - to need testing on something older.


Nix has every maintained python version, so does Arch User Repository, and those can definitely be in parallel.


it's very easy to install them, probably easier than learning pyenv but I have not tried. I just want to use the default settings as much as possible, they're guaranteed to stay as long as python is alive and typically have less surprises for me on daily coding.


Whether it's easy or not depends on the OS you're working on. On Windows, it's just another executable installer, so it's trivial. On macOS, the official installers are terrible (they have no uninstaller), so you want another way. On Linux etc., some distros have only one Python version in the repos, some have two or three, but the latest versions of all reasonable distros won't let you install Python 3.4.


> probably easier than learning pyenv

Yeah “pyenv install 3.10.8” is basically rocket surgery, near impossible to learn.


Install a python version on your system:

    $ pyenv install 3.11-dev    # or use `pyenv install --list` to list all available versions
Use it in your current shell:

    $ pyenv shell 3.11-dev
Use it in the project directory:

    $ pyenv local 3.11-dev
Use it as the global default:

    $ pyenv global 3.11-dev
All of the mentioned commands can be called with no arguments to show the current python version for the context.

There you go, now you know pyenv. Call it with no arguments to show the other possible commands.


Not really windows, just WSL; there's a windows fork that's recommended. Classic python, splitting the ecosystem up. Not writing something like this in go or rust, which are actually cross-platform capable, just seems like excess effort.


Because thanks to WSL the lingua franca of scripting languages is bash and that’s how pyenv started. Rewriting when it works fine on Windows and every dev I know uses WSL anyway is excess effort.

Hell, the fork could have used Go or whatever too but it went with a bunch of .bat scripts.


Hello from corporate America where WSL is most certainly not standard and will require a virgin sacrifice if you want to get it approved by IT.

Which is to say, no pyenv is not an option for everyone.


Indeed, I recently worked on a fairly simple deployment for a state government agency. When I described the process to their IT staff and mentioned that we prefer using WSL, they looked at me like I had grown 2 heads.


Even aside from that, WSL is only useful if you are writing code that will run on Linux at the end of the day. This is usually true for web apps, but that's not the only thing people write in Python. Libraries, in particular, need to be able to target Windows directly.


or even if you want to build a python binary for windows, you can't cross compile it as far as I've found


I should maybe test out pyenv again. I tried it several years ago but ran into some limitations. Or at least I thought I did; I might just have used it wrong. My main use case, back then at least, was to have just one global-like environment for every Python version I was working with, and sometimes an additional one for debugging/testing some different package combinations. Back then it seemed to me that pyenv only worked on a per-project basis, but I might just have followed incomplete guides and not looked into it deeply enough.


How does pyenv compare with conda, mamba, pip and poetry? I typically use conda but ever since they broke with python 3.10 I have been considering moving to other environment managers.


I have not been able to understand why conda is so popular. I have no trouble with scientific computing with plain old `pyenv` and have never had any issues with C extensions or compilation with plain old `pip`.

Conda seems to be the most prone to getting in weird states or just hanging while "Solving environment." I have been happier leaving it behind.

Really the only two I would even consider using now are pyenv and poetry.


What's your platform? Issues with compilation are far more common on Windows, for example.

The other thing about Conda is that it doesn't cover just Python and Python packages, but all kinds of software that may be directly or indirectly relevant. For example, on Windows, there are Conda packages for the Windows SDK, and for various C++ compilers.


Try Pyenv for the Python version and poetry for the venv and dependency management.

Once you're used to the workflow it's pretty smooth. (We switched from conda 2 years ago)
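A sketch of that workflow (commands per the pyenv and poetry docs; the version is just an example):

    pyenv install 3.11.0                   # build the interpreter
    pyenv local 3.11.0                     # pin it for this project directory
    poetry env use $(pyenv which python)   # point poetry's venv at it
    poetry install                         # resolve and install dependencies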


According to StackOverflow the conda 3.10 bug was fixed this January. https://stackoverflow.com/a/70614013/733092


Is there a problem with venv? I am seriously curious. I've used venv for 10 years and so far it has been able to do everything I ever wanted.


There's no problem with venv. Python is just a popular language and people have built a lot of tools.


venv is great, but it requires the Python version you want already installed. This is pyenv's job.
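i.e., roughly this, assuming pyenv's default install prefix:

    pyenv install 3.11.0
    ~/.pyenv/versions/3.11.0/bin/python -m venv .venv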


I love pyenv but hate that it forces me to specify the patch version when installing python. Most of the time I wish I could just enter 3.9 and have it give me the latest.


> Set global and per-project python versions

Does that mean I can do things like install black and jupyter once and use that install across projects?


Yes? I mean that's just how standard python works as well.


I use anaconda and every time I create a new environment I have to install jupyter and black in order to have access to them after running `conda activate $envname`.


Or just use Conda/Mamba.


If we're going to be making backwards incompatible changes to unittest, it'd be nice to introduce PEP8-compatible names, deprecate the old ones, and remove them in 3.20 or something.

(I know there's pytest and nosetests, but especially when teaching people, it's nice to use what's in the standard library and unittest sticks out for having things named differently)


Logging is another module that is non-PEP8.

Would be nice to clean these up at some point to increase consistency.


or even just getting rid of sprintf formatting.


> I know there's pytest and nosetests, but especially when teaching people, it's nice to use what's in the standard library and unittest sticks out for having things named differently

You’re just crippling them for no reason.


yes, please! Very minor issue indeed, but this kind of cruft adds up and I'd love to see it fixed.


It’s a great time to be a Python developer. Python seems to be settling in to a really nice sweet spot of accessibility and power, and with the upcoming performance improvements it’s got an even brighter future.


Except for when it comes to packaging and distribution. That is still a nightmare for newbies.


Agreed, but Poetry works wonderfully if the features are sufficient for your use case.


Sure, but wait until you see what next year brings us!


Umm, I didn’t have a great experience with Poetry. For starters, I wanted to create a separate virtual env to install Poetry in and have Poetry install the project’s dependencies in a separate virtual env. There was simply no way to do that and I didn’t wanna install Poetry globally on my machine. I also didn’t wanna use docker just for this, so I realized that Poetry could just not be for me.


This isn’t a poetry problem. Use pipenv for this. It’s pip but uses separate venvs to install packages. Pipenv install poetry will do exactly what you want I think.


Cue the bevy of devs complaining their mission critical script, using features deprecated a decade ago, will no longer work.


I hope we don't end up with the same things as the java world. JDK9 removed and moved to a library a few things like JAXB. Since JDK11, every new version is harsher for programs that use reflection to mess with non-published internals. Both are good changes, BTW.

In theory, you add a few libs and you're ready to upgrade. In practice, adding libs at scale is hard, and quite a few dependencies of dependencies of ... are using reflection to mess with internals.

As a result, there are still a lot of enterprise applications on JDK8 and JDK11, even if the security impact of this is bad and the fix should be easy.

For a typical example: see https://github.com/x-stream/xstream/issues/101


> Both are good changes, BTW.

I disagree. The entire point of setAccessible is to say I want to access nonpublic things. I shouldn't have to also say "Simon says, pretty please" for each such access on the command line to be able to do so.


Marking random hidden internals accessible doesn't mean a private API becomes public. setAccessible is a useful tool for debug access, or for when an annotation gives extra guarantees, or you can use it on code you have access to.

But you can't expect your API vendor never to change the internals of the implementation again just because you forced your way in and monkeyed with them. Writing your code this way is a surefire way to need rework at some unpredictable time in the future.

If you're lucky, it just crashes. A worse possibility is subtly corrupting and destabilizing your program. That was what happened in some of these cases: hashcodes were stored/cached in collections after an upgrade, and people using reflection to 'restore' a collection didn't update these caches correctly or dropped them in the wrong hash bucket. Then the hashmap had elements that were both there and not there, depending on how you queried it.

One of the differences of a senior engineer is that you not only say it works today, you can guarantee how it stays working long term, even when the environment changes. Things like tests and comments are part of that. API contracts are a big part of that, both in being explicit about them as an API provider, and in not touching non-guaranteed parts as an API consumer. Using setAccessible like this is a grave violation of an API contract, and it takes away your ability to upgrade to a later version.


Those are all good reasons to not want to use setAccessible, but they're not good reasons to be unable to do so.


For a lot more info on removals: https://docs.python.org/dev/whatsnew/3.12.html


> Removed many old deprecated unittest features ...

> You can use https://github.com/isidentical/teyit to automatically modernise your unit tests.

I tried out teyit when it was a "Show HN" 10 months ago (at https://news.ycombinator.com/item?id=29948813 ).

It not only pointed out places where I was using a deprecated alias, but also fixed a few places where I was using the API poorly (using "assertTrue(a binop b)" instead of "assertBinOp(a, b)").
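The rewrites are roughly of this shape (illustrative, not teyit's literal output):

    # before: deprecated alias, and an assert that hides the operands
    self.failUnless(result == expected)   # failUnless is among the 3.12 removals
    self.assertTrue(item in collection)

    # after: modern methods that also report both operands on failure
    self.assertEqual(result, expected)
    self.assertIn(item, collection)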

I recommend it.


I can't comment on the quality of the tool, I just don't like this attitude of "just use this tool we don't support, hosted on some platform we don't control, which might disappear tomorrow". Back in the day, this tool would have been shipped (and supported) with stdlib, or at least be frozen somewhere "official".


That tool is not a permanent addition to the list of dependencies, but for a single migration effort. As such, it is unnecessary to include it in the standard library.


It should still be hosted somewhere that won't disappear tomorrow if GH goes bust or starts packing malware.


It's also on PyPI. https://pypi.org/project/teyit/ .


Then that would have been a better link to publish in docs.


If you really go "back in the day", this sort of tool wouldn't exist at all, and you would be expected to do it manually, because there were no easy ways to modify the syntax in a space- and comment-preserving way. ;)

That was developed for 2to3.

Huh. I did not know this - in lib2to3, the "asserts" fixer handles this sort of conversion, so you have the ability already .. so long as you don't use new Python 3.10+ syntax that lib2to3 doesn't handle. See https://docs.python.org/3.7/library/2to3.html#2to3fixer-asse... .

I wonder if serhiy-storchaka (who committed that What's New entry) knows about lib2to3's "asserts" fixer. And if that might be a relevant addition or change to the documentation.

Someone who cares about this might want to point it out.


Sad to see distutils going away. I have very old projects which use distutils for distribution. Wrote them in Python 2 days and they were ported easily to Python 3. But distutils going away is going to break them.

I know setuptools is advanced and recommended, but distutils worked fine for me and my users for so many years. It is going to be overhead for me to comb through all my projects, replace distutils with setuptools, and test them, and testing out packaging and distribution is quite a lot of work.
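For what it's worth, for a plain setup.py the swap is often just the import; a sketch with hypothetical metadata:

    # before: from distutils.core import setup
    from setuptools import setup   # setuptools provides a compatible setup()

    setup(
        name="example-project",    # keep your existing arguments as-is
        version="1.0",
        py_modules=["example"],
    )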

I am growing unhappy with Python due to these breakages. Is there some other programming language whose maintainers don't break the "user space" like Python has been doing time and again?


Good news for you:

> For projects still using distutils and cannot be updated to something else, the setuptools project can be installed: it still provides distutils.

From: https://docs.python.org/dev/whatsnew/3.12.html#removed


> other programming language whose maintainers don't break the "user space"

I think Go qualifies here. (but it's still much younger than Python).


A warning that while there's something called sysconfig that replaces distutils.sysconfig, it is not compatible:

https://github.com/libguestfs/libguestfs/commit/26940f64a740...


I've been pretty happy with Ruby. I don't ever recall changing my ruby scripts for a new release (whereas with Python it's been many times). Rails is a different story although rails is not ruby. Ruby is similar enough to python that most pythonistas can already read it, so learning effort is mainly just the API differences.


> smtpd has been removed according to the schedule in PEP 594

Damn! I guess it's time to cross out my snippet of running a debug smtp server in almost any Linux distro & Mac:

    python -m smtpd -n -c DebuggingServer localhost:25

> Remove the distutils package. It was deprecated in Python 3.10 by PEP 632

So apparently we need to replace

    from distutils.version import StrictVersion

with

    from pkg_resources import parse_version

A bit of labor for my code but OK.


Darn the smtpd thing is going to be annoying next time I try to debug a mail sender. It was super convenient. Shame they didn’t leave just that.


I use that one too. An alternative might be https://aiosmtpd.readthedocs.io/en/latest/cli.html
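If I remember the CLI docs right, the near-equivalent one-liner is (double-check the flags on that page):

    aiosmtpd -n -l localhost:8025    # -n: don't drop privileges; the Debugging handler is the default per those docs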


packaging.version.parse() might be the canonical replacement?
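A quick sketch of that replacement (packaging is a third-party distribution, though pip vendors it):

    from packaging.version import parse

    assert parse("3.12.0") > parse("3.9.1")   # PEP 440-aware comparison
    assert parse("1.0rc1") < parse("1.0")     # pre-releases sort before finals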


The largely-unmaintained Python 3 port of syncthing-gtk already failed to start on Python 3.11, which removed obsolete methods like bind_textdomain_codeset(): https://github.com/void-linux/void-packages/issues/40430

Though this project isn't exactly in the best shape, having prior bugs: https://salsa.debian.org/debian/syncthing-gtk/-/issues/2


Don't think I was using any of these removed things, and it all sounds sensible looking at the fuller list at https://docs.python.org/dev/whatsnew/3.12.html#removed

> Remove the filename attribute of gzip.GzipFile, deprecated since Python 2.6, use the name attribute instead.

2.6 wow, that's some compatibility right there. About time that got removed then! Either that or undeprecate it, if it's fine to use. Any decision at this point is a good one.


Perhaps after the big mess of the backwards incompatibility of 3.x vs 2.x, they are erring on the side of caution now.


Can't blame them looking at some of the responses in this very comment section.


> Support for the Linux perf profiler to report Python function names in traces.

hidden gem


Is there a standard way a Python script can ask for an older version of the environment? If I could insert a few lines of code into the top of the WikidPad source, to specify an OLDER version of wxPython, for example... I'd be able to switch to Linux from Windows.

As it is now, there were breaking changes in wxPython (likely due to the culture of breaking working code extant in Python) which result in WikidPad being broken.

It seems to me that if the Python community continues in this direction, nothing will work more than 3 months after its last github commit.

[edit/append] No, I'm not sure it's wxPython and not wxWindows that is the issue. I'll have to stuff my Linux boot SSD in, and then try to build WikidPad again to know for sure (it's been too long)

I didn't write WikidPad, it seems to have been last maintained about 2012, but I use it for my notes, etc. I really like it, but the breakage on the Linux side is a show stopper. It's really unfortunate that Linux doesn't have a stable API like Win32, and forces dependence on source code.

I'd consider installing an older version of Linux to force older python, wxPython, wxWindows, etc., and try to figure it out from there... the last time I looked at it I got a wall of confusing errors and couldn't patch it enough to get any functionality out of it.


wxPython is a Python module. It isn't the same as Python itself. Don't blame the language for issues you're having with libraries written in it.

On top of that, wxPython is a Python module which interfaces with an external library (wxWindows). Are you sure your problems are even with the Python module, and not the result of breaking changes in wxWindows itself?


Sure, just put something like `wxPython < 4.2.0` into your `requirements.txt`. https://pip.pypa.io/en/stable/reference/requirements-file-fo...


As near as I can tell, the repository [1] doesn't have a requirements.txt

[1] https://github.com/WikidPad/WikidPad



pipenv or anaconda are the way to get reproducible python environments


Talking about deprecated items: there is a lot of pyenv in this discussion, and some even suggest it to augment venv. But isn't that deprecated since 3.6?

“Deprecated since version 3.6: pyvenv was the recommended tool for creating virtual environments for Python 3.3 and 3.4, and is deprecated in Python 3.6.”

Versioning is a nightmare, btw. I have Conda freshly installed, but sometimes python is not the same as python3: one points to 3.9 and one to 3.10. And that's unless you know the difference between pip3, pip, and python -m pip (and not pip3…).

Using venv, somehow the Kivy script switched it out and fell back to python 3.9. Where is it? And then it failed, as there is no python on my macOS, just python3. I struggled the whole night over how to fix the bash alias, failed, then used a symbolic link.

Just a "user", guys. Do not confuse me please.

“ There should be one-- and preferably only one --obvious way to do it.” sigh.


venv is not deprecated - it's still the standard way to create new environments in Python.

pyvenv, which was removed in Python 3.8, was a wrapper script around venv. The same functionality today is invoked via `python3 -m venv`.
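e.g., the usual workflow today (the installed package is just an example):

    python3 -m venv .venv             # create the environment
    . .venv/bin/activate              # activate it (POSIX shells)
    python -m pip install requests    # installs into .venv only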


This is enough for me to swear off python for any long-lived or production system. I am tired of this kind of endless incompatibility. It has fractured the ecosystem sufficiently that things break all the time when trying to use any non-trivial set of dependencies. Maybe the developers will stop this crap with Python 4. It’s the next Perl 6!


One thing that’s ticked me off is Python has been expanding the standard library.

That’s a pain when you already are using libs that imitate the “new” behavior.

Either the new tools are less good, or they destroy development in the existing tools.

Twisted was, and is, a really cool framework. Unfortunately, it’s doomed.

There were a ton of really good datetime utilities before … datetime.

Oh well. Complain complain.


I don’t think datetime is a good example, been around for a long time. Dataclasses might be one.


Speaking of standard lib, today I was looking to do a sliding window over an iterator and ended up on the pairwise function in itertools that was added in 3.10.

Seems like such a wasted opportunity, to add a sliding window for n=2, and then in the documentation add a recipe for a sliding window function for any n.
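The recipe in question is short, for what it's worth (roughly as the itertools docs give it):

    from collections import deque
    from itertools import islice

    def sliding_window(iterable, n):
        # sliding_window('ABCDE', 3) -> ABC BCD CDE
        it = iter(iterable)
        window = deque(islice(it, n - 1), maxlen=n)
        for x in it:
            window.append(x)
            yield tuple(window)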


There are a lot of things like this in the boltons library: https://boltons.readthedocs.io/en/latest/iterutils.html#bolt...

It's nice to not have to depend on anything external, but boltons is something I just treat as an expanded part of the standard library.


Their suggested replacement for smtpd is a package that hasn’t been updated in almost two years.


smtpd has barely had a meaningful commit since 2014.


I'm curious if there's anything preventing them from switching to semver.


Their own decisions. They want to spread little breakages over many releases instead of collecting them into big breakages, due to the 2 to 3 fiasco. Feels like an over-correction imho.


Have you been reading the same comments as I have in this thread? It's all a bit nuts if you ask me.


I thought having smtpd was useful but I guess people must be using better maintained alternatives.

Anyone care to comment on what they are using for receiving emails in Python?

Update: smtplib is still on, so that makes sense.


> Anyone care to comment on what they are using for receiving emails in Python?

In most cases, you use an existing MTA (like Postfix or whatnot) and set it up to deliver mail to a Python script. Or, even less directly, you use an IMAP library to access mail after it's delivered to a mailbox, or use a mail provider which can call a webhook over HTTP when an email is received.

For the rare situations where you do really want to write your own mail server, aiosmtpd exists, and the migration process doesn't look terribly complicated: https://aiosmtpd.readthedocs.io/en/latest/migrating.html
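A minimal receive-side sketch with aiosmtpd, per its documented Controller API (the handler class is mine):

    # pip install aiosmtpd
    from aiosmtpd.controller import Controller

    class PrintHandler:
        async def handle_DATA(self, server, session, envelope):
            # called once per received message
            print("From:", envelope.mail_from, "To:", envelope.rcpt_tos)
            print(envelope.content.decode("utf8", errors="replace"))
            return "250 Message accepted for delivery"

    controller = Controller(PrintHandler(), hostname="localhost", port=8025)
    controller.start()   # runs the SMTP server in a background thread
    input("Listening on localhost:8025; press Enter to stop\n")
    controller.stop()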


Aiosmtpd is great. A simple script to send all received mails to something like slack is demonstrated here: https://github.com/ont/slacker


Is there any model or rationale behind introducing breaking changes in a minor release? What are the version numbers good for if not to indicate how important the changes are?


Python has never followed semver. It's saner to treat minor releases as semver-major, and patches as semver-minor.


The changes seem to be in modules.

What is the relation between a new Python version and changes in modules?

Are some modules considered to be part of the language?


Those modules are parts of the standard library.

As far as CPython is concerned, most of those modules are implemented in C, and therefore for practical purposes part of the interpreter itself.

> How can a language be released?

If we really want to be pedantic, a new version of the language spec can be released.

Translated to C++ terms, this is like if the ISO committee approved a change to the stl in the new standard.


They're in the standard library, like `std` or (ish) Node API, etc.


The stdlib is linked to the version of the language, yes.


I wonder if there are languages (that have a stdlib) where that isn't the case


The C stdlib is more tightly coupled to the OS than to any particular compiler, and is not changed when you select a particular dialect (though the language standard does specify what the stdlib must support).


Breaking changes in Python releases are nothing new, that’s why none of my Python projects are using the latest version :shrug:


The title is referencing this line (not TFA's actual title):

> In the unittest module, a number of long deprecated methods and classes were removed. (They had been deprecated since Python 3.1 or 3.2).

I doubt your projects are using 3.1 or 3.2 (or older, except perhaps 2.7, but then you wouldn't care that 3.12 was removing something deprecated vs 3.11 which has it)?
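
For concreteness, assertEquals is one of those removed aliases; on 3.12 the first test below dies with an AttributeError:

    import unittest

    class ExampleTest(unittest.TestCase):
        def test_old_alias(self):
            # Alias deprecated since 3.2, removed in 3.12.
            self.assertEquals(1 + 1, 2)

        def test_new_name(self):
            # The supported spelling.
            self.assertEqual(1 + 1, 2)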


I do not know what is wrong with Python, because I use it only occasionally, for short programs that do not use obscure features.

Nevertheless, there is no doubt that it is the worst software project I have seen in several decades from the point of view of keeping compatibility between versions.

For other software, I may need to have 2, or at most 3, versions installed in order to use the programs that are compatible only with certain versions.

On the other hand, for Python, for many, many years I have been forced to keep around 7 or 8 versions installed at all times, to keep happy the many programs that claim to be compatible only with certain Python versions (and not with any newer versions; some programs are even compatible only with a list of non-consecutive versions).

Building from source any program that depends on Python may frequently require various temporary configuration modifications, to ensure that the program is built only for the Python versions with which it is compatible.


I will say this, I love python, and I hate python the way only someone who loves it can. I think everything you said is accurate.

I love writing Python code. I love its standard library and many third party libraries. But I loathe Python’s dependency management, and Python version upgrades are sometimes a huge pain. There are tools to help with this (pyenv or I am fond of asdf for python versions; poetry or pipenv for isolating package dependencies), but they only partially solve the problems and introduce new ones.


There's quite a bit more than just the unittest removals

https://docs.python.org/dev/whatsnew/3.12.html#removed

An example of a more recent deprecation is the 'distutils' module which was deprecated in 3.10.


> An example of a more recent deprecation is the 'distutils' module which was deprecated in 3.10.

distutils was functionally deprecated long before that. Even the Python 2.7 documentation (released in 2010!) recommended that users avoid distutils and use setuptools instead: https://docs.python.org/2.7/library/distutils.html


Isn't setuptools deprecated now too or on the chopping block?


I believe setuptools is more frowned upon - in favor of PEP 517/PEP 518 tools - than deprecated.

Last I checked, there was no good way to package hand-written Python/C extensions other than setuptools, but that was a couple of years ago.

In any case, there's a large installed-based of setuptools-based projects making it hard to get rid of.
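
For what it's worth, the C extension case is still just a few lines of setuptools (names here are illustrative, not from any real project):

    # setup.py -- minimal sketch for a hand-written C extension
    from setuptools import Extension, setup

    setup(
        name="mypkg",  # hypothetical project name
        version="0.1",
        ext_modules=[
            Extension("mypkg._speedups", sources=["src/speedups.c"]),
        ],
    )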


Deprecated does not mean removed. They are still there, working fine, with no big reason to change them. I know because I just checked my own and my workplace's code, and there are several usages of deprecated code. And we are on 3.9/3.10.


Merriam Webster lists the following relevant meanings for "deprecated":

> to express disapproval of

> to withdraw official support for or discourage the use of (something, such as a software product) in favor of a newer or better alternative

In other words, deprecated is a strong signal that one should strive to migrate from items marked as such.
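
Python even makes that signal machine-checkable, if you opt in; a sketch:

    import warnings

    # DeprecationWarning is hidden by default outside __main__ and
    # test runners; opt in to see what will need migrating:
    warnings.simplefilter("default", DeprecationWarning)

    # Or fail fast in CI:
    #   python -W error::DeprecationWarning yourscript.py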


So what? As long as it works, don't change it. Some of these parts have been deprecated for nearly a decade (Python 3.2 was released February 20th, 2011), and they will keep working for some more years. Some don't even have a working replacement yet. Investing now in changing old code just to be compatible with a not-yet-released version is a bit pointless if you have more important work on the list.


Python 2 was supported for way longer than initially announced. Distutils as well. Maybe deprecation is not a strong enough signal, because many projects hesitate to cut loose the stuff they deprecated for fear of community backlash. Sometimes things have to be deprecated, but the community will never get around to adapting if deprecated things stick around forever.

Compared to that, Java still carries some deprecated items around. But with Java 9, deprecation for removal was introduced, which is a strong signal that stuff will be gone in two releases. Kubernetes has a strict deprecation policy as well, such that skipping more than two releases is asking for trouble.


Except the notice is about "removing deprecated modules"? Deprecation usually means "These are going away at some point. Be aware.".


It should mean 'next major version'.

But the Python community fears major version changes. So the minor versions become major versions.


The move to web-based apps, or connected software in general, was a genius move by the programming community.

Now there will be endless churn to keep programs up to date for "security reasons", and that costs money.

We can even pull the plug on programs nowadays, to give the users the very best experience! Otherwise the lusers might have felt satisfied with what they have.


Actually, this is a return to the past; we used to call them timesharing systems.


You say that, but the web platform rarely if ever removes features in this way. My web code from 15 years ago works the same as it always did.


Obviously you chose the wrong framework or even worse - none at all.

Yes, I am whining, but only half joking.

Breaking backwards compatibility ruins my mood. And Python is getting way too much slack for what they have been doing to the users.

People wanting to stick to 2.7 are ridiculed and soon the software security police will start arresting offenders.


I recently upgraded a Python 2 + Django 1.x project with no tests to Python 3 and latest Django.

Using 2to3, the most trouble I had was with db migrations, which I just squashed. The rest was ferreting out str() problems that 2to3 didn't find (e.g. the redis package happily taking strings but returning bytes by default), Django regexp URL changes, and trivial stuff like that.
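
The redis one, for example (decode_responses is the relevant redis-py option, if I remember right):

    import redis

    r = redis.Redis()
    r.set("greeting", "hello")
    r.get("greeting")   # b'hello' -- bytes by default on Python 3

    r_str = redis.Redis(decode_responses=True)
    r_str.get("greeting")   # 'hello'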

It took a couple of days work.

I also wanted to upgrade the frontend part (a webpack build of Vue). I stopped after a few hours of going nowhere and am still apprehensive about approaching that particular thing.


If people that want to stick to 2.7 can find a vendor that supplies them security patches, then there is no problem.


Wait, since when does python.org have a discuss subdomain? And it's running Discourse, which is built on RoR? What the...



The Python community tends to favor pragmatism over purity.


Not sure how the Not Invented Here syndrome would help in this case.

Besides, a few years ago they moved from Mercurial to Git(Hub).


Why does it matter?


Great, so now Python is not only incompatible with itself across 2 and 3, but also incompatible with itself even within Python 3.

Sigh.


Or, the glass-half-full view of the world suggests that deprecating low-usage, high-maintenance modules increases overall long-term improvement and compatibility, at the cost of small, planned, well-notified, short-term inconvenience.


I think you're missing the point. Nothing is wrong with deprecation, but it should be done properly. Ideally, breaking changes should correspond to a major version increase.


You should sign up to maintain those ancient modules


Several groups of people offered to maintain Python 2. They were told very clearly they could not do so "officially", and were even threatened with lawyers if their thing looked like it could be mistaken for "Python 2".


Here's an actively maintained version of Python 2:

https://www.activestate.com/products/python/python-2-7/

As you can see, it is still offered, and the PSF doesn't have a problem with that.

The problem with those other groups is that they didn't want to maintain Python 2. They wanted to take it and develop it, evolving the language separately from Python 3 - while also calling the result "Python 2.8" (and presumably later Python 2.9 etc). It stands to reason that people who own the brand don't want it to be associated with a third-party fork that's making its own major design decisions, no?


I genuinely thought some people wanted to do this but were refused, so I've learnt I was wrong.

It would be nice if this was linked from python.org as well.


Do you have pointers to that? The one I saw was called “python 2.8” which is definitely a point of confusion for people who might think it was supported.


Look up Tauthon, I believe that was the name.


Yes, that’s the one. Note that they’re still there – the PSF just didn’t want them calling something which isn’t made by the Python developers Python 2.8. Once they adopted the Tauthon name they were fine.


So don’t maintain it officially and use a different name. What’s your complaint exactly?


Why do they need to be maintained? Just leave them as is.


Apart from requiring security fixes, they also sometimes block changes and newer features. At that point, the maintainers might have to choose between breaking the obsolete module or shelving the feature/fix.

Deprecations are the way to indicate that support for the module will be eventually dropped. There is no reason to leave those modules there if they cannot be relied on.


Because supposedly people will report bugs for them and someone will need to fix them.


“Sorry, we don't fix bugs in deprecated code. Closed.”


“Deprecated” usually implies intent to remove. Regardless, even dealing with such rejections is a support burden. And security vulnerabilities will tend to be fixed, even in deprecated code.


Code rots.


Have you tried Lua.. they brake for no one..


What? Lua has breaking changes every release.


Exactly.

