Useful Python decorators for data scientists (bytepawn.com)
233 points by Maro on May 23, 2022 | 100 comments



It's kind of cool that this is possible but I'm not adding that kind of complexity to my code unless I really need to. And I really never need to.


Decorators are one of those language features that I want to use more of, but every time I attempt to, I realize that what I'm doing could be achieved more easily another way.



Another I use is tenacity, when I need to call unreliable external services:

https://tenacity.readthedocs.io/en/latest/
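
For readers who haven't seen it, this is roughly what typical tenacity usage looks like (the fetch_from_service function and the retry policy here are made up for illustration):

    from tenacity import retry, stop_after_attempt, wait_exponential
    import requests

    # Retry up to 5 times, backing off exponentially between attempts
    @retry(stop=stop_after_attempt(5), wait=wait_exponential(multiplier=1, max=30))
    def fetch_from_service(url: str) -> dict:
        response = requests.get(url, timeout=10)
        response.raise_for_status()   # any HTTP error raises and triggers a retry
        return response.json()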


Aside from @dataclass, @lru_cache is my most used.


I created a decorator for validating arguments for functions that are receiving arguments from user input. It eliminated _so much_ boilerplate code.

I use decorators quite a bit. The feature is incredibly powerful. Once you find @lru_cache, you start realizing how many ways you can take advantage of the feature.
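
A rough sketch of what such an argument-validating decorator might look like (this is a guess at the pattern, not the commenter's actual code; the validate name and the predicate-per-argument style are assumptions):

    from functools import wraps

    def validate(**checks):
        # checks maps argument names to predicates, e.g. age=lambda a: a >= 0
        def decorator(func):
            @wraps(func)
            def wrapper(*args, **kwargs):
                for name, predicate in checks.items():
                    if name in kwargs and not predicate(kwargs[name]):
                        raise ValueError(f"invalid value for {name!r}: {kwargs[name]!r}")
                return func(*args, **kwargs)
            return wrapper
        return decorator

    @validate(age=lambda a: 0 <= a <= 150, name=lambda n: len(n) > 0)
    def register_user(*, name: str, age: int):
        ...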


@cached_property is the new lru_cache for me, much of the time :^)
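
For anyone who hasn't run into these two, a minimal illustration of the difference (the fib and Report examples are invented here, not from the thread):

    from functools import lru_cache, cached_property

    @lru_cache(maxsize=None)   # caches per unique argument, shared across all callers
    def fib(n: int) -> int:
        return n if n < 2 else fib(n - 1) + fib(n - 2)

    class Report:
        def __init__(self, path: str):
            self.path = path

        @cached_property   # computed on first access, then stored on the instance
        def text(self) -> str:
            with open(self.path) as f:
                return f.read()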


Have felt the same way till I used one for FastAPI. One can literally implement a production-grade* machine learning pipeline in 15 lines of code using transformers, in a single Python file:

  from fastapi import FastAPI
  from pydantic import BaseModel, constr, conlist
  from typing import List
  from transformers import pipeline

  classifier = pipeline("zero-shot-classification",
                      model="models/distilbert-base-uncased-mnli")
  app = FastAPI()

  class UserRequestIn(BaseModel):
      text: constr(min_length=1)
      labels: conlist(str, min_items=1)

  class ScoredLabelsOut(BaseModel):
      labels: List[str]
      scores: List[float]

  @app.post("/classification", response_model=ScoredLabelsOut)
  def read_classification(user_request_in: UserRequestIn):
      return classifier(user_request_in.text, user_request_in.labels)

*: Production grade if used in combination with workers, a Python quirk I felt is not relevant to the topic of decorators.
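
A minimal way to get those workers, assuming the snippet above is saved as main.py (the filename, host and port here are assumptions, not from the comment):

    # launch the FastAPI app above with multiple worker processes
    import uvicorn

    if __name__ == "__main__":
        uvicorn.run("main:app", host="0.0.0.0", port=8000, workers=4)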


@production looks risky, compared to checking an environment variable. Hopefully that host_name function is guaranteed to return the same result every time it is called, and can never fail or raise an exception.


It's also a bad idea, because it implies you have different implementations for each environment.

Which means your dev and production environments are quite different, increasing risks of letting stupid mistakes slip into prod.


Oh so true. No matter how you express it, “if production” will likely land you in debugging hell sooner or later.


You can do the @production one with an envar or any other mechanism. I've been using it for years (with an envar) and it works really well.

Besides switching between something like production and test you can catch a case where an envar isn't set or is an unexpected value.
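
A minimal sketch of the env-var flavour; the APP_ENV variable name and the fail-loudly behaviour when it is unset are choices made here, not the article's implementation:

    import os
    from functools import wraps

    def production(func):
        @wraps(func)
        def wrapper(*args, **kwargs):
            env = os.environ.get("APP_ENV")
            if env is None:
                raise RuntimeError("APP_ENV is not set")
            if env == "production":
                return func(*args, **kwargs)
            return None   # silently skip outside production
        return wrapper

    @production
    def send_alert(message: str):
        print(f"ALERT: {message}")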


To add to the list of very handy footguns:

@reloading to hot-reload a function from source before every invocation (https://github.com/julvo/reloading)


That looks amazing, thank you! I wouldn't want it in my production code but it seems like it would be great for something that I'm working on in the REPL.


If you are using a REPL, first you should be using IPython. Second, you should be running:

    %load_ext autoreload
    %autoreload 2

Now all your imports will be automatically reloaded.

So good that I have it in ipython startup scripts


There's also reloadium [0] for tight dev iteration loops

https://reloadium.io


I've only really ever needed one decorator. The lovely @fuckit from fuckit. Tells your code to ignore all errors and keep on truckin.

Great for prod if an error is unacceptable

(or more realistically, web scraping where you just don't care about one off errors)


Does it just wrap the code in a context manager where __exit__ returns True or something?


I'd imagine it's just a try/except wrapper with a funny name.


It's not just a try/except wrapper, it's a try/except wrapper for every single statement individually in a given class or function :)

It has some ast stuff built in.

    @fuckit
    def this_is_fine(garbage_in):
        x = 1 / 0
        return "garbage_out"

    this_is_fine("")   # returns "garbage_out"; the ZeroDivisionError is silently swallowed


All of these are great examples of useful higher-order functions, meaning they take and output functions.

I just wouldn't use them as decorators in proper code.


Why not tho? Decorators are very explicit markers of higher-order functions. When I see one, I know what it's for.


It's rare that all users of a function will want it to be "production" or "parallel" or the rest of the decorators in the piece.

Applying these functions as decorators, i.e. with @, means you can't run the non-parallel version, or the test version when not in production, etc.

In the end, decorators, though nice on the first day of usage, reduce composability by restricting usage to whatever you wanted on that same day.

(this is not a general remark, it doesn't apply to DSLs that use decorators, e.g. flask)


To the extent that this is a real issue, you could just use a decorator that exposed the wrapped function as an attribute of the returned function object.

You could even write a higher order function, callable as a decorator, that would transform a decorator that didn't do this into one that was otherwise identical that did.
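
One low-effort way to get this, for what it's worth: functools.wraps already stores the original function on the wrapper as __wrapped__, so a well-behaved decorator exposes the undecorated version for free. A sketch (the parallel decorator below is a stand-in, not the article's implementation):

    from functools import wraps

    def parallel(func):
        @wraps(func)                     # also sets wrapper.__wrapped__ = func
        def wrapper(*args, **kwargs):
            # ... fan the work out to worker processes here ...
            return func(*args, **kwargs)
        return wrapper

    @parallel
    def crunch(xs):
        return sum(xs)

    crunch([1, 2, 3])                # decorated call
    crunch.__wrapped__([1, 2, 3])    # plain, undecorated call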


You’re saying write it this way instead?

    def my_func():
        ...

    my_decorated_func = my_decorator(my_func)


No they don't.

The decorator pattern is a well known one, where one "decorates" a function by passing it into another function. GP expresses that they would avoid the pattern with these decorators.

The decorator operator is essentially prefix notation of the form `f1 = f2(f1) = @ f2 f1` which is what the GP alluded to, i.e. that f2 is a higher order function since it takes a function and produces another function. In fact, the @ operator is a higher order function as well, since it takes 2 higher order functions.


I am struggling a bit with the @production decorator. I never had the issue of only wanting to run a function on prod. Often I want slightly different behaviour, but then I'd use env variables (say, using a prod and a dev API address or DB address). Wouldn't it also be better design to keep dev and prod as close as possible?


There's more environments than just dev and prod. Even if the environments differ having some extra telemetry in a test environment often makes sense. So a @telemetry or @production decorator that does something in a non-production environment is an easy way to add that capability to functions.

I've found I like a decorator better than some envar testing custom logging/telemetry. If I have a custom_log function I use everywhere, I have to use it everywhere I might want it. With a decorator I can add it to only a few functions and get far less noise.


A better title would be 'Useful Python Decorators by a Data Scientist'.

Those decorators are exactly what data scientists would do, while software engineers would be terrified.


> while software engineers would be terrified

At least surprised. There are solutions to these problems already.

Author seemed to have learned decorators and is enthusiastic about abusing them, instead of learning the stdlib.

The capturing of print statements. Why not use the Logging machinery, instead?

Or the @stacktrace, it seems what they really want/need is a debugger.

But anyway, if these solutions fit their programming style better, so be it.


To be fair, logging is badly made. There are at least two libs to make it human again.


Indeed fair. But even if I preferred the decorator style, I'd be cautious about dropping a stdlib or any other mature solution.

When I create my own solutions, I tend to underestimate how hard it will be to get it working bug-free and to maintain it.


Which ones do you suggest/recommend?


I'm a big fan of loguru.


@redirect could be handy when the print statements are embedded in someone else's code.

We use a library that is very...chatty (some function calls send a screenful of info/progress to the screen), and I think I'm going to steal this to make it quieter.


I found them quite creative tbh. Bad software engineers also tend to hide behind their best practices.

It would be more interesting to point out the parts you feel are so terrible.

Not everything has to be designed for a super critical prod environment with >10 coders working non stop on it.


> Not everything has to be designed for a super critical prod environment with >10 coders working non stop on it.

You don't need a super critical prod environment to have decent code. Half of these are hardcoding environment configuration, others have hidden side effects that the caller of the function cannot control at all, and others are badly reimplementing things that already exist (@redirect -> you want logging for this, @stacktrace -> use a debugger)


A debugger isn't necessarily available in all environments. At my employer data scientists do a significant amount of their work in Databricks, where as far as I know it's impossible to drop into a debugger to trace execution.

That said, I'm not really defending these specific decorators.


Well, there’s your first thing to fix.


They did.


> Not everything has to be designed for a super critical prod environment with >10 coders working non stop on it.

And even when it does, cargo-culting rules-of-thumb is generally the wrong way to do that. Best practices are better treated as the Pirate Code than the Divine Writ.


I find them creative too - and I didn't say they are terrible. Actually, I'm also from data background and that's exactly the type of stuff I would come up with too.

But as I'm recently trying to improve my software skills, I notice that while those are indeed useful in the short term, in the long term they are not worth the price. The @production one, seems like a disaster waiting to happen.


> Bad software engineers also tend to hide behind their best practices.

I don't believe I've ever encountered this. Can you elaborate on what you mean?


Easiest example is when you have 100% unit test coverage but still nothing of actual consequence is tested.


The case I've seen most frequently is developers falling into a bit of a trap because code bases from several years ago don't look like they were written yesterday. "Best practices" is often the justification for taking on work that doesn't have a clear benefit.

Some folks _really_ insist on changing everything to be "modern" and follow "best practices" using "up to date tooling" (invariably for a non-consensus-but-very-cool definition of "modern" and "up to date"). Often, it's switching to something that's only been around for 6 months over tooling that's _incredibly_ well supported and has been around for decades. I'm not opposed to using something new, but give me a reason beyond "it's new and everyone uses it now". That's doubly true when the new approach has the old tooling as a dependency and is basically a different interface to the same things (i.e. adding a dependency without taking one away).

There's lots of things that need to be updated to be more modern, sure. But there's also a trap of lots of tempting-but-relatively-low-value "best practice" updates that some folks will insist on spending 100% of their time on.

Another common example is some variant of this situation:

"Yes, X looks like a wart and is for many common use cases. It's there because of functionality Y needed by projects A,B,C. Downstream projects D,E,F,G already have workarounds in place where it matters. If you remove the wart, it breaks key functionality for projects A,B,C and means that D,E,F,G have to change the way they use this. Sure, you could handle this in a different way that could be a bit cleaner, but is it worth changing? Changing it is non-trivial and means a bunch of other people suddenly need to do extra work for no clear benefit. Oh, you really think it is, and want to devote the next 6 months to doing that and only that..."

Sometimes things really need some love and attention to get up to date. However, it's also important to avoid work that's tempting to do, but low-impact and high-risk (in terms of unintended consequences).


>Those decorators are exactly what data scientists would do, while software engineers would be terrified.

Actually, when I read the post I'd guessed this is what an ex Software Engineer who is now a Data Scientist would do. And looking at the author's LinkedIn confirmed it.

You have to have a software engineering background to come up with this stuff in the first place.


I often use my custom timer decorator to time execution of functions/methods. This is not the only way I can do that, but I think it's a very convenient option.
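
Something along these lines, presumably (a generic sketch, not the commenter's actual code):

    import time
    from functools import wraps

    def timer(func):
        @wraps(func)
        def wrapper(*args, **kwargs):
            start = time.perf_counter()
            try:
                return func(*args, **kwargs)
            finally:
                elapsed = time.perf_counter() - start
                print(f"{func.__name__} took {elapsed:.3f}s")
        return wrapper

    @timer
    def slow_sum(n: int) -> int:
        return sum(range(n))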


I never liked decorator/annotation pattern.

It sweeps complexity under the rug with a deceptively simple facade.


That’s called abstraction. Decorators are sugar over simple composition, nothing more.


The redirect() method shown in this post only works for output that is written via the Python sys.stdout handle.

If your code links against C/C++ extensions that have internal printf calls, then you need something lower-level, which can intercept the data at the file handle.

There's a great solution for that on StackOverflow: https://stackoverflow.com/a/22434262/162094
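
The gist of that approach, heavily simplified: duplicate the stdout file descriptor, point fd 1 at a file, and restore it afterwards, so printf() calls inside C extensions are captured too (this is a sketch of the idea, not the StackOverflow code verbatim):

    import os
    import sys
    from contextlib import contextmanager

    @contextmanager
    def fd_redirect_stdout(target_path):
        stdout_fd = sys.stdout.fileno()        # normally fd 1
        saved_fd = os.dup(stdout_fd)           # keep a copy so we can restore it
        try:
            with open(target_path, "w") as target:
                sys.stdout.flush()
                os.dup2(target.fileno(), stdout_fd)   # fd 1 now points at the file
                yield
        finally:
            sys.stdout.flush()
            os.dup2(saved_fd, stdout_fd)       # put the original stdout back
            os.close(saved_fd)

    with fd_redirect_stdout("captured.log"):
        print("this goes to captured.log, even output from C-level printf")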


Even on the Python level it seems much better to use the logging module and attach a handler to sys.stdout. If all you need is to print some stuff to the console, just attach a handler with logger.addHandler(logging.StreamHandler(sys.stdout)). You can attach other handlers for files or to send the logs via some network interface, and you can silence or filter as desired. You can even attach metadata so that you can process the messages in different ways depending on what you want to do with that metadata. This can all be done separately from the calling code, so the main logic just needs to know "log this message at this severity" and all the processing, filtering and transmission will be handled by the logger.
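
For example (nothing here beyond the stdlib logging API; the logger name and filename are arbitrary):

    import logging
    import sys

    logger = logging.getLogger("pipeline")
    logger.setLevel(logging.INFO)
    logger.addHandler(logging.StreamHandler(sys.stdout))      # console output
    logger.addHandler(logging.FileHandler("pipeline.log"))    # and a file, if wanted

    logger.info("training started")    # goes to both handlers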


I'm a big fan of the logging module, but it can't solve the problem of silencing or redirecting noisy C extensions. For extensions which you control, maybe the ideal option would be to offer logging configuration, or callbacks, or even integration with Python's logging module. But for externally written extensions, your only option is lower-level manipulation of the relevant file handles. Python's sys.stdout and logging module are powerless in that scenario.


Could you give an example of "attach metadata"? Are you talking about attaching metadata to the logger object or handlers? How is that done? If I recall correctly, the only thing you can send with an individual log event is a string message and a log level, no other metadata.

I looked into this because I love structured logging, and I wasn't able to figure out how to make that work easily with the default logging module.


A solution I have used is to create a logging record factory similar to what's described here: https://stackoverflow.com/questions/59585861/using-logrecord.... There are also logging adapters as described here: https://docs.python.org/3/howto/logging-cookbook.html. The docs how-to also describes how to add contextual information via a filter. Not knowing your particular setup I can't say how well any of these will work for you, but these gave me all the tools I needed to selectively attach IDs and other metadata to celery task logs so I can easily grep for the task ID or a few other unique pieces of data and easily see the whole log for a given task, or for tasks that all used the same device, etc.
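
The LoggerAdapter route, for anyone curious, looks roughly like this (the task_id field and the format string are just examples):

    import logging

    logging.basicConfig(format="%(levelname)s [%(task_id)s] %(message)s")
    base = logging.getLogger("worker")
    log = logging.LoggerAdapter(base, {"task_id": "abc123"})   # metadata attached once

    log.warning("retrying upload")   # -> WARNING [abc123] retrying upload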


One place I am using decorators is PyTorch in combination with Hydra: decorating functions so that an instantiation returns a function, not a call of the function. This allows easy reuse of existing functions in an ETL pipeline, setting it up with config files and Hydra.

But I have to say, the native python logging module is pure evil.


I just posted another comment advocating that using logging would be better than redirect. I use it quite extensively in production and am mostly happy with its setup and use. What makes it evil for you, and what is different about data science that makes it so?


Decorators make it hard to step through functions in a debugger. Or hard to see what is going on.


Is it really that bad? At the end of the day a decorator is itself just a function call, not particularly magical in any way. The alternative would be reimplementing the logic of the decorator in the function to be decorated, which isn't necessarily better.


I am struggling a bit with decorators. I find myself in a lot of situations where I could use a decorator on a function, but only some of the time.

In those cases it's much more convenient to design it as a higher order function.


This is either a bug or could be handled using `enumerate(..., start=1)`:

    for i, line in enumerate(lines):
        i += 1
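
That is, the manual increment can go away entirely:

    for i, line in enumerate(lines, start=1):
        ...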


> for data scientists

> Let's assume I write a really inefficient way to find primes

let me stop you right there


These decorators are an absolute nightmare, sorry.

- @parallel: Doesn't let the caller configure the actual amount of parallelism, and no, the only overhead is not "having to define a merge function". Creating the processes has quite a bit of overhead and calls might end up being far slower than just a regular serial execution. Also, you might not want to eat all CPUs on a single function (not to mention that cpu_count tells you the amount of CPUs in the system, not the amount that are available to your process; see the snippet at the end of this comment).

- @production/@deployable: Functions that sometimes return values and sometimes don't depending on the hostname? That's just a bug waiting to happen. Also, at least use an environment flag and not a hardcoded list of servers.

- @redirect: Don't reinvent the wheel, use the logging module. Log messages will be more useful, you'll be able to redirect them wherever you want, enable, disable, increase the level, even use different logging modules and enable/disable them at runtime. There are even handlers for pretty IPython output.

- @stacktrace/@traceable Somewhat more useful but, again, the logging module usually does this. Also, debuggers exist.

If anyone is using this, be very aware of where you're using them, how and possible side effects and maintenance problems. Personally, I wouldn't let anything like this run in a somewhat serious codebase.
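
On the cpu_count point in the first bullet, the distinction looks like this (sched_getaffinity is Linux-only; elsewhere you'd need something like psutil):

    import multiprocessing
    import os

    multiprocessing.cpu_count()      # CPUs physically in the machine
    len(os.sched_getaffinity(0))     # CPUs this process is actually allowed to use (Linux)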


Another classic of the genre I've seen is excessive memoization, done using precisely these sorts of decorators. By the time it has to be debugged in production, the culprit has left the building. It can even obscure the source of other bugs on top of the ones it introduces.

Long story short, if you want to have a cache, have a cache. If you want to make something parallel, make it parallel. This stuff just gets in the way. The only real exceptions I make for decorators by and large are for pydantic things and similar. It's very noticeable that if you're doing stuff in pytorch lightning you don't tend to fiddle with decorators much at all but in tensorflow it's common. Cute code isn't good code, end of story


- @redirect - logging

This suggestion (logging) doesn't help if it's e.g. other code you don't have control over, but this is also available as a context manager in the standard library as "redirect_stdout": https://docs.python.org/3/library/contextlib.html#contextlib...

At least the article doesn't advocate inserting them into __builtins__ (which I have seen people do with their own special custom functions before).
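
For reference, the context manager mentioned above in use (the chatty print stands in for a noisy third-party call):

    import io
    from contextlib import redirect_stdout

    buffer = io.StringIO()
    with redirect_stdout(buffer):
        print("something chatty")    # any print() inside the block is captured
    captured = buffer.getvalue()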


> This suggestion (logging) doesn't help if it's e.g. other code you don't have control over,

Most of the time, other code uses the logging module so you do have control over that output. I haven't seen code not controlled by me that uses print() without me wanting it to.


> Most of the time, other code uses the logging module so you do have control over that output. I haven't seen code not controlled by me that uses print() without me wanting it to.

If you are a traditional software engineer, the other people's code you deal with as an upstream dependency probably looks different than the other people's code that data (or many other) scientists work with.


> @stacktrace/@traceable Somewhat more useful but, again, the logging module usually does this. Also, debuggers exist.

While logging is good for production code, the tracing helpers look very useful for developing or debugging. One of the first things I do when troubleshooting why something is failing (in tests, or when reproducing things locally where I can run things with a visible console) is to print out _what was passed in_, or what the mid-function state is, so that I don't have to constantly query them in the debugger. Many many lines that look like

    print(f">>> foo: {foo}, bar: {bar}")
Being able to automate this in a more convenient way seems very useful. The icecream library [0] does this, probably in a more robust way, but I've never used it because I always forget about it.

0: https://github.com/gruns/icecream


Agreed, most of these would be more suitable as configuration.


Dear god, so many antipatterns on a single page.

Interleaving configuration management with the business logic code is a massive technical debt waiting to engulf future generations in a world of maintenance pain.


The whole Data Science / Jupyter ecosystem reminds me so much of the old days of "MATLAB to C++": the Python ecosystem was supposed to be the best of both worlds and ease the transition from prototyping to production. Laziness won once again.

Data scientists need to be trained with software engineering skills, software engineers need to be trained with data science skills. This is the only way we do not end up with nonsense like this.


> Data scientists need to be trained with software engineering skills, software engineers need to be trained with data science skills.

It's a nice idea, but it turns out to be a pretty big ask. Particularly at my employer, where a large proportion of new hires come straight out of college. Data scientists have usually studied something like economics, math, statistics, or physics; most of them haven't been introduced to software engineering at all. We try to bring new hires up to speed, but there's only so much we can do with a series of relatively short sessions on Python and git.

Similarly, software engineers don't necessarily have the requisite background to understand the kind of work data scientists do. They'll have had a few semesters of calculus but it's likely they won't have had much if any exposure to data analysis or machine learning. They might not have even had a stats course in college. Further, in my experience they have had little inclination to understand how data scientists work, nor how their software products may or may not fit data scientists' needs.

Opining for a moment here ...

I've had the privilege of working for a few years at a position that kind of straddles the line between data scientist and software engineer (though I was technically a data scientist), and part of that job was mentorship and training. Getting good code out of data scientists and software engineers can be tough to do. I've seen nearly as much messy, uncommented, unformatted, unoptimized code from engineers as I have from data scientists, it's just that when I make recommendations to data scientists they'll actually listen to me.

I'm just lucky engineers started finally using the internal libraries I maintain rather than their own questionable alternatives (though if I never have the "why aren't you pinning exact dependencies for your library? My code broke!" discussion again it'll be too soon.)


Would you have recommendations / pointers for this?

I straddle both worlds - I’m much more on the tech side, but sometimes interact with scientists / MATLAB codebases.

What data science skills / methodologies would be useful for me to learn?

P.S. And what did you mean by MATLAB to C++? That was a specific time frame (I suppose in the early 2000s ish) when C++ was taught to scientists in the hope they’d be able to productionize their MATLAB code? With not great results (i.e. C++ learning curve + lack of software engineering skills…?) Thanks!


From my experience in the fields of DSP / Data / AI during the last 10 years, issues arise when product teams are segregated into jobs (one guy for initial prototype, then one guy to prep the integration, then one guy for the ops side of things, etc.): people need to be interested and involved in the product they are building end-to-end! Yes this is more demanding, yes this requires perpetual training, but gosh it is rewarding!

My take (non-exhaustive) with the current ecosystem is to apply Agile and DevOps methodologies:

- Use Git everywhere, all the time, always
- Use Jupyter early on: great for quick prototypes & demos, keynotes, training material, articles
- Once the initial prototype is approved, archive Jupyter notebooks as snapshots
- Write functional tests (ideally in a TDD fashion)
- Build and/or integrate the work into a real software product, be it in Python, C++, Java, etc.
- Use tools for deterministic behavior (package manager, Docker, etc.)
- Use CI/CD and GitOps methodologies
- Deliver iteratively, fast and reliably

And by "MATLAB to C++" was a reference to a time (2010's) when corporation were deeply involved with MATLAB licenses, could not afford to switch easily to Python and lots of SWE with applied math background had to deal with MATLAB code written by pure math guys without any consideration for SWE best practices and target product constraints. Nowadays, if the target product is also in Python, there is way less friction, hopefully :)


What’s your recommendation in terms of tooling for cases where it’s not just prototype -> production, but an iterative process? I love notebooks for prototyping, but I find it’s a lot of work to make sure notebook code and prod code are in sync. Maybe just debugging with IPython?


When you've "productionized" a part of your notebook into a Python module, refactor your notebook to use the module instead. Usually, the notebook code will shrink by 80% and will switch to model documentation and demo.


Yeah, that’s basically what I do, but I often find I need to play around with intermediary data within functions.


I create my own classes for this. (Essentially to do the same thing as sklearn pipelines, but I like creating my own classes just for this debugging/slowly expand functionality reason.) Something like:

    class mymodel():
        ...
        def feature_engineering(self): ...
        def impute_missing(self): ...
        def fit(self): ...
        def predict_proba(self): ...

Then it is pretty trivial to test with new data. And you can parameterize the things you want, e.g. init fit method as random forest or xgboost, or stack on different feature engineering, etc. And for debugging you can extract/step through the individual methods.


This is a blind guess here, but if you need to inspect the inner data of your function after writing it, it might mean the scope is too broad and your function could be split?

This is where standard SWE guidelines could be of help (function interfaces, contracts definition, etc)


> Data scientists need to be trained with software engineering skills, software engineers need to be trained with data science skills

That's kind of like saying the solution to liability issues arising in the practice of medicine is for physicians to learn lawyer skills and lawyers to learn physician skills.

It's a great idea, if you ignore the costs to get people trained and the narrowing of the pool for each affected profession.

Heck, it's hard enough to get software engineers who work almost entirely on systems where a major part of the lifting is done by an RDBMS to learn database skills.


Yeah training data scientists seems like the answer but in reality it’s just not feasible most of the time. Data science is really hard, and good engineering is really hard. Very few people can do both well.


Scientific code in the Python ecosystem is horrible in general. Architecture astronauts, stack overflows from recursion, bloat, truncating int64 in casts to double, version incompatible mess due to the latest features at all costs. I have seen it all. They treat Python as if it were a sound language with the guarantees of Haskell.


Are there good examples (established libraries or projects) of scientific code in Python? And more broadly, would you have examples of what good scientific code could/does look like?

Not being facetious! I’m genuinely curious and would love to learn more. Thanks


https://github.com/scverse/scanpy

This is incredibly popular in single cell analysis


The Scipy project is a good example, with code from many scientific domains.


What other antipatterns are there? I can understand the criticism of having configuration code with business logic, and think there should be better commenting/docstrings.

But other than that, I thought it was fine. The use of type hints is pretty awesome, in particular. Do you guys just not like higher-order functions?


Type hints in that code are erasing the actual signature of the function, so you won't get type checks on the arguments. For @redirect, you'd be better served by using the logging module.

Higher order functions are ok, but the problem with this what they're using it for. Code that behaves different based on environment without any explicit warning for the caller? That's fairly dangerous.


> Type hints in that code are erasing the actual signature of the function

True enough. For the reader of these comments, it's possible (but non-obvious) to properly respect type hints in a decorator. See here: https://mypy.readthedocs.io/en/stable/generics.html#declarin...
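
Since Python 3.10 this is reasonably clean with ParamSpec (older versions can import it from typing_extensions); a sketch with an invented traced decorator:

    from functools import wraps
    from typing import Callable, ParamSpec, TypeVar   # ParamSpec: Python 3.10+

    P = ParamSpec("P")
    R = TypeVar("R")

    def traced(func: Callable[P, R]) -> Callable[P, R]:
        @wraps(func)
        def wrapper(*args: P.args, **kwargs: P.kwargs) -> R:
            print(f"calling {func.__name__}")
            return func(*args, **kwargs)
        return wrapper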


I'm so glad I'm not the only one. I was reading the code thinking this seems like a different language to the one I use (albeit I'm not too experienced).

That seems obtuse to me in the extreme. Maybe I'm just not used to that style however.


It's example code in a blog post.


Unironically that means it will end up in hundreds if not thousands of production codebases.


I'm not sure it's framed like that, it's specifically talking about production patterns


'production' for many kinds of data analysis is not the same kind of production as for, say, a web app or service. You can have a production environment for running ad hoc jobs.


That seems a little unlikely since implementing business logic is not typically what a data scientist does.


Not as far as the data scientist in question is concerned, anyway.

I'm reminded of that joke:

Business logic (n): your stuff, which is shit, as opposed to my code, which is beautiful

I say this as a machine learning engineer


The first example illustrates perfectly why I dislike explicit typing:

    primes: set[int] = set()

    candidate: int = randint(4, domain)
It's just so ugly and redundant.

My feeling is that redundant stuff like type hints make programmers create more bugs because they don't see the forest for the trees anymore.

As if you wrapped the driver of a car in so many seat belts that they cannot see the road anymore.


> My feeling is that redundant stuff like type hints make programmers create more bugs because they don't see the forest for the trees anymore.

I've been using type hints heavily and it's been such a great help in avoiding bugs and also faster completion in IDEs. Most of the time, with correctly typed functions you don't have to write explicit types for variables. It's a little bit more code, yes, but it's definitely worth it for any codebase that grows more than a handful of files. Even for smaller ones I have found it very useful.


> I've been using type hints heavily and it's been such a great help in avoiding bugs and also faster completion in IDEs

Agreed. I've been using type hints for the past few years now even though most of my projects are fairly small (< 10,000 LOC). I wouldn't say they've prevented a huge amount of bugs, but they've definitely prevented a few, and I think they've helped in library design and documentation.


Type hints are great at helping understand intent. A lot of bugs happen from understanding a code signature but not understanding intent. In a language like Python where types are all loosey goosey it's easy to write subtle bugs by misunderstanding intent.


I don't exactly know what Pycharm does but I only add type hints when Pycharm's default type checker raises a complaint or is unable to infer the type. To me that's just the perfect middle ground.

Sometimes adding a type to a single object (e.g. `df: pd.DataFrame = ...`) only once at the top of a 20-50 line function, is usually sufficient for Pycharm to infer all other types downstream without ambiguity.


> My feeling is that redundant stuff like type hints make programmers create more bugs because they don't see the forest for the trees anymore.

Not ones that ship to production. Some developers have a terrible time trying to get through the mypy errors. But from my observations, I haven't seen any typing related errors in a very long time.



