Python, unlike C, has the mod operator always return a positive number (twitter.com/id_aa_carmack)
209 points by tosh on Dec 29, 2021 | 169 comments



The submission is a tweet which doesn't really have a title, so the submitter was forced to make up one. Unfortunately the assertion in the chosen title is not correct. In Python the mod operator returns a number with the same sign as the second argument:

  >>> 10%3
  1
  >>> (-10)%3
  2
  >>> 10%(-3)
  -2
  >>> (-10)%(-3)
  -1


I was going to mention this, having literally just run into this behavior yesterday.


Thank you for highlighting this.


My understanding from college was that C doesn't have a modulo operator; if you look at the spec (according to my professor (according to my memory)), it's called the "remainder operator". Modulo and remainder are not the same operation, so both Python and C are correct.


FWIW there's an entire paper on the subject of these operations: https://core.ac.uk/download/pdf/187613369.pdf

(well a report in TOPLAS really, but that's pretty close)

There are technically 5 different versions of this operation, though only 2 (possibly 3) are in common use. Common Lisp seems to implement all of them except, somewhat oddly, the euclidean version (the "actual modulo"); its mod function is instead a flooring division.

edit: it looks like Python also follows flooring, not euclidean, meaning the quotient is rounded downwards, and per "a = nq + r" the remainder has the sign of the divisor:

    >>> 5 % -2
    -1
    >>> divmod(5, -2)
    (-3, -1)
The C version is the "truncating" division, meaning the quotient is truncated (rounded towards 0).
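To make that concrete, here's a minimal C sketch of rebuilding the floored (Python-style) remainder on top of C's truncating % (mod_floor is my name, not anything standard):

  #include <stdio.h>

  /* C's % truncates toward zero, so the floored remainder has to
     be reconstructed: when the signs of r and n differ, shift by n */
  int mod_floor(int a, int n) {
      int r = a % n;                       /* truncated remainder */
      if (r != 0 && ((r < 0) != (n < 0)))  /* signs differ: adjust */
          r += n;
      return r;
  }

  int main(void) {
      printf("%d %d\n",  5 % -2, mod_floor(5, -2));  /* 1 -1 */
      printf("%d %d\n", -5 %  2, mod_floor(-5, 2));  /* -1 1 */
  }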


Somewhere out there in the universe, there's a nightmare that starts with "Well, zero is kind of an unusual number, so for safety's sake we've defined our division to always round away from zero. This resulted in an interesting mod operation..."


Ponylang.io - There is a similar operation (toward zero)

https://stdlib.ponylang.io/builtin-Integer/#div_unsafe

and, in Ponylang, i/0 usually results in 0.


I think this is a bit different. Pony's div is total, and in order to be total div by 0 is 0, which is not a particularly absurd idea.

That said, I do feel that Pony should have added a 'panic' concept and then forced actors to handle panics, but that's me.


> which is not a particularly absurd idea

It's not completely untenable, but it's still kinda janky. I've never heard of any mathematical argument that justifies division by zero yielding 0; contrast 0^0, which at least has an argument for yielding 1: http://mathscitech.org/articles/zero-to-zero-power


From a programming perspective, there's nothing sufficiently preposterous in the idea of dividing by zero that would require bringing the whole world to a low-level halt just for attempting it by default. You tried to calculate the average value of what turned out to be an empty list? Go suffer the most severe exception, like the ones reserved for those reading from nonexistent memory.


Another common case is adjusting some value according to a spatial vector's magnitude, like normalizing a vector or scaling gravitational attraction according to the distance between two points.

In the majority of these cases the most reasonable answer is 0.


> I've never heard of any mathematical argument that justifies division by zero yielding 0

0/x = 0 for all x except zero; defining 0/0=0 removes that exception.


x/x = 1 for all x except 0; defining 0/0 = 1 removes that exception.


Split the difference, 0/0 = 0.5


It gets worse: differentiation would be 0/0 if you went straight to the limit rather than just approaching it, so 0/0 can be any differentiable function, not just a value.

(Caveat: I’m probably being slightly imprecise with my language here)


I think it's worth all getting on the same page on the fact that math is fake and stupid. If it's more useful for 0/0 to == 0, that's justification enough imo.



Maybe the best argument is that the average of zero items should be zero?


But then why should the average of zero items be zero? It still seems like just an arbitrary choice to me.


Toward zero is a fairly intuitive way to do it, I think. Intuitively, if you were, say, discretizing a length (L) with a number of steps (n) of some size (d) and went for d=L/n, you might want to always end up with d*n <= L even if L is negative, right? Although, having an integer for d may not make sense, or you might want to take |L| beforehand anyway...


Good paper—it was the primary source we looked at when implementing these operations in Julia. Specifically, Julia implements all versions and overloads `div` and `rem` to take a rounding mode argument. It also gives special names to a few of them and they come in pairs: `div` & `rem`, `fld` & `mod` (`fld` stands for “floored divide” as opposed to `div` which rounds towards zero). The `%` operator does remainder like C, which was a tough call, but we figured being compatible with C was worthwhile for people porting code. It’s also less useful in Julia due to default 1-based indexing of arrays; for that there is a `mod1` function that returns a value between 1 and n.


> The `%` operator does remainder like C, which was a tough call, but we figured being compatible with C was worthwhile for people porting code. It’s also less useful in Julia due to default 1-based indexing of arrays; for that there is a `mod1` function that returns a value between 1 and n.

After thinking about this, what I really want is a way to access an array with an unadjusted number and tell the array to do whatever modulizing is necessary. e.g. instead of

    a[small]
    a[mod1(big)]
I'd like to have something more like

    a{small}
    a[big]
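To pin down the semantics I mean, a hypothetical C helper (names are mine, nothing standard, and I'm ignoring Julia's 1-based indexing for simplicity):

  #include <stdio.h>

  /* hypothetical sketch of what a[big] would do under this
     proposal: the container applies the modulo itself (len > 0) */
  int get_wrapped(const int *a, int len, long long i) {
      long long r = i % len;
      if (r < 0) r += len;  /* floor the truncated C remainder */
      return a[r];
  }

  int main(void) {
      int a[4] = {10, 20, 30, 40};
      printf("%d\n", get_wrapped(a, 4, 6));   /* 30 */
      printf("%d\n", get_wrapped(a, 4, -1));  /* 40 */
  }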


Hmm. This means we'd have to do mod every time we accessed an array which seems not great, right? But probably it could be optimized away for the common/easy cases.


In WUFFS if you've got a 40 item array of things, an unsigned 8-bit integer n and you want to access things[n] the compiler will conclude n's type must be base.u8[0..40]

Now, having refined the type this way, when you try elsewhere in the code to read some unknown byte into n, WUFFS will reject that because who knows if it exceeds 40? To make the code compile, you need to actually write code that ensures n is always less than 40.

Of course you aren't going to write big general purpose software under this sort of constraint, it's like having a two year old following you everywhere insisting on a detailed explanation of why you are doing every single thing and asking the most irritating "But why?" questions after each explanation, exhausting. But if you're processing untrusted data it's exactly what you needed to stay safe.


Julia already has bounds checking at runtime on array access, so you could do the mod operation only if the bounds check fails.

But I don't get how this would practically work. Would each array have a preferred n that is used for the mod operation? For most situations where that makes sense, using a 2d array seems more logical.


I assume n would be size(a)


Oh yeah - I got mixed up on this and thought we wanted to access sub arrays within a.


The parent comment has brace-syntax as well as the square brackets.

I assume the braces are there for "I want to skip doing the wrapping"


I was thinking that, but they didn't mention it, so I assumed it was just a typo. Could be, though.


I kind of get what you are saying syntactically, but the semantics are weird to me as the C version is only defined for integers in the first place so saying that the operator is truncating something is an awkward mental model.


> I kind of get what you are saying syntactically, but the semantics are weird to me as the C version is only defined for integers

Why? Mathematically, dividing two integers yields a real. If you want the result to be an integer, you need to define some rounding operation on the real result to produce an integer.

That is the source of the divergence between C and Python here, C rounds by truncation (towards 0), Python rounds by flooring (towards -inf).


> Mathematically, dividing two integers yields a real

Not true. If you define 'divide' as inverse multiplication over the reals then that is closer to being true (zero is an integer, and dividing an integer by zero doesn't yield a real). In that case every real can be expressed as n = a/b where a and b are integers and b != 0, and then yes, n is a real.

But 'divide' could just as well be defined along the lines of divisibility in number theory (the study of integers), where every integer can be expressed as n = q*b + r, where n, q, b, r are all integers and 0 <= r < b. Then you could say that n divided by b = q, with r as remainder, and n mod b = r. These are my preferred definitions of mod and integer division.


sorry, rational, not real.


Depends on your definition of division. If you choose a definition which is closed (e.g. you're working in number theory) then division of integers can only produce other integers.


> Mathematically, dividing two integers yields a real

Rational, more specifically.


Oh, I'm dumb. Sorry. (I got the terms for quotient, dividend, and divisor mixed up.)


> Oh, I'm dumb.

We both know that's not true.

> Sorry. (I got the terms for quotient, dividend, and divisor mixed up.)

No big, brainfarts happen.


Just don't divide by zero. That's how you get black holes!


Mathematically, Euclidean division of two integers yields an integer. You don't need to define reals or rounding. Children learn how to do it before they learn about rational numbers.


I invite you to ask your child how he would share three cookies with you. I've been asking variations of that question to all three of my children since they could talk.

Of interesting mathematical consequence, each of my children converged on the same solution at different ages. Them: three cookies. Daddy: zero cookies.


If it's not truncating, how do you describe its behavior when e.g. an odd number is divided by 2?


(Read the rest of the thread and look for my username.)



This one is very concise and great as a reference; you can remind yourself of which negatives go where in an instant.


K&R "The C Programming Language" (1978), p.37:

> The binary arithmetic operators are +, -, *, /, and the modulus operator %.

https://archive.org/details/TheCProgrammingLanguageFirstEdit...

At least, that was the C reference I used in college.


You'll only find that terminology in K&R. The C specification only uses "modulo" to refer to the behavior of unsigned numbers, and uses "remainder" to refer to the behavior of %. This is true of at least several versions of the C standard dating back to 1999; I don't have the C90 spec handy.


Indeed!

The draft version "X3.???-1988" at https://web.archive.org/web/20161223125339/http://flash-gord... (linked-to from Wikipedia) has:

> % modulus operator: 3.3.5

which seems off to a good start for K&R, but then also has:

> remainder operator, %, 3.3.5

and 3.3.5 says "the result of the % operator is the remainder" -- not a whiff of a modulus present there.

Thanks!


People generally learn programming from tutorials, programming books, blogs, and Stack Overflow, not from the standard of a programming language.

You will find "modulus operator" to refer to % in C in all of the above (other than the standard).


That is what most people do... which is why those of us who write tutorials, write programming books, write blog posts, and write Stack Overflow answers have a responsibility to go to the source.

The more something gets repeated from person to person, the higher chance that somewhere in that chain, somebody screwed up. That's why people like professors, people who run YouTube programming channels, people who answer C questions on Stack Overflow, etc. should generally strive to cut down the number of links in the chain.


The fact that tutorials teach incorrect knowledge does not make the knowledge correct.


I think there is the same problem with the conditional operator "?" which a ton of resources name as the ternary operator.

edit: tertiary -> ternary


I know it as "the ternary operator" (used that way in K&R at https://archive.org/details/TheCProgrammingLanguageFirstEdit...), and until about 2 minutes ago had never heard it as "the conditional operator" - even though that's how the spec describes it!


It gets named "the ternary operator" in many languages like C because they only have one ternary operator, it's always that one. In some (I think?) languages there are several ternary operators, and so for them it'd be like if we called C's XOR operator ^ "the binary operator".


Yeah. It's "the ternary operator" the way John Carmack is "the author of the tweet we're discussing". It's a correct description, but it's not his name.


Names are just whatever you call something. For inanimate objects the objects can't care what you call them, so at the very most you're privileging one person's opinion over another if you choose to call it the same thing. For ideas even more so. Ordinarily the most important purpose of the name is to ensure you're talking about the same thing.

Calling it "Ayers Rock" privileges some guy from the 19th century, calling it "Uluru" privileges a bunch of people who lived near it for longer than that, the rock itself has no opinion (and also doesn't believe anything about ritual magic, property law, gender based taboos, or the tourist industry)

Now, John Carmack is a little different because John is a person and it is rude to use names people don't like. But it isn't impossible, it's just rude. Calling the previous President of the United States of America "Tangerine Palpatine" works just fine - we both know who I meant. In fact it's a little weird that our culture has parents assign to their children full legal names, only kinda-sorta making it possible for them to choose for themselves later, the Culture's habit of allowing children to pick one of their names as part of growing up seems better.


This is also exactly what the ANSI ("Second edition") K&R book I have in front of me says.

It goes on to say: "x % y produces the remainder when x is divided by y" and that "the sign of the result for % are machine-dependent for negative operands, as is the action taken on overflow or underflow".


The language in the reference manual at the end of the book (p. 188) is much more specific. It says:

The binary operator / indicates division. When positive integers are divided, truncation is toward 0, but the form of truncation is machine-dependent if either operand is negative. On all machines covered by this manual, the remainder has the same sign as the dividend. It is always true that (a/b)*b + a%b is equal to a (if b is not 0). The binary % operator yields the remainder from division of the first operand by the second.
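A quick C sketch checking that identity across the sign combinations (with C99's truncating division; pre-C99 the individual signs were machine-dependent, but the identity still had to hold):

  #include <stdio.h>

  int main(void) {
      int pairs[][2] = {{7, 3}, {-7, 3}, {7, -3}, {-7, -3}};
      for (int i = 0; i < 4; i++) {
          int a = pairs[i][0], b = pairs[i][1];
          printf("%3d/%-2d = %2d rem %2d, identity holds: %d\n",
                 a, b, a / b, a % b, (a / b) * b + a % b == a);
      }
      return 0;
  }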


Interesting. If the equality must hold, then this means that a/b has different values on different architectures depending on whether a%b is positive or negative.


C only pinned this down in C99. In C90, the sign of the remainder was implementation-defined in those cases in question.


So only 22 years ago :)

Are C programmers still using C89 and C90?


There are a lot of C compilers out there, and many of them have not implemented all of C99. Microsoft started to add C99 features in MSVC2013 and still doesn't support the whole spec.

The C compilers supplied by microcontroller vendors in particular are often, to put it charitably, not great at conforming with the latest standards.


IIRC, Microsoft has publicly stated that they only plan to implement the parts of C99 required by the C++ standards.

EDIT: Fact checking myself, I found [1], so apparently they've decided to start adding C11 and C17. C99 support isn't entirely clear, though.

[1] https://devblogs.microsoft.com/cppblog/c11-and-c17-standard-...


C99 added some (mandatory) features that had some issues with implementation, most notably variable-length arrays; C11 made those features optional instead. MSVC is unlikely to ever support VLAs (as noted in the blog post you linked).

A parallel can be drawn to `export template` in C++98, which is a feature that was only ever implemented by a single compiler (whose engineers, when asked for recommendations for others attempting to implement it, offered "don't"). As a result, it was dropped from later versions of the standard, and so almost every compiler doesn't actually support C++98 100%, but no one cares because the things that it doesn't implement don't matter. VLAs aren't quite as universally maligned as `export template`, but given that they were dropped to optional in later versions, it's not totally unreasonable to claim implementation of all of C99 that matters while not supporting VLAs.


VLAs are considered a huge mistake in security coding, to the point that Google sponsored the work to clean the Linux kernel of them.


How recent is the -Wvla warning in GCC? Because it seems that would easily find them for you, and -Werror=vla would prevent them from creeping in.

Answering my own question: by doing a binary search on the online GCC documentation, I discovered that it's not documented up to 4.2.4. The next version up for which online documentation is hosted is 4.3.6, and that has -Wvla.


Assuming all source code is available, the code is never migrated to other compilers, and every company in the world would actually bother to use such warnings when using GCC.

Naturally removing them from ISO C was the saner option.


But they are forever in C99, which GCC supports.

VLA's crept in to the kernel, which is what this thread is about.

With -Wvla, you can learn about where all the VLA's are being declared in the code base.

Also, don't forget that VLA's are a GCC extension which existed before C99. They are available in -std=gnu89 mode and in -ansi mode (diagnosed if you use -pedantic).

They were not "removed" from ISO C; they are optional. If you implement VLA's, there is an ISO-C-conforming way to do that.

  $ cat vla.c
  int fun(int x, int a[x]) { return 0; }
  int main(void) { return 0; }
  $ gcc -ansi vla.c
  $ gcc -std=gnu89 vla.c
  $ gcc -ansi -pedantic vla.c
  vla.c:1:1: warning: ISO C90 forbids variable length array ‘a’ [-Wvla]
  int fun(int x, int a[x]) { return 0; }
  ^~~
  $ gcc -std=gnu89 -pedantic vla.c
  vla.c:1:1: warning: ISO C90 forbids variable length array ‘a’ [-Wvla]
  int fun(int x, int a[x]) { return 0; }
   ^~~
This warning is now linked to [-Wvla] rather than just [-pedantic], so you can have -pedantic on, yet allow VLA's without a diagnostic.

  $ gcc -std=gnu89 -pedantic -Wno-vla vla.c
  $ gcc -ansi -pedantic -Wno-vla vla.c
Likewise you need -Wvla to police your code against VLA's creeping in, even if you're compiling in -ansi/-std=c90 mode. Traditionally, -pedantic will do it, but it's too strong; it rejects other things you might want. For instance, in the gnu89 dialect, you can initialize local structs and arrays with non-load-time computable expressions, which is complained about with -pedantic.

Dynamic allocation from the stack is something that is quite essential in some situations, and a more efficient approach compared to other alternatives in some other situations.

alloca never goes away, and VLA's are better than alloca in many ways.


Apparently Google's security team thought otherwise.

You can discuss your point of view with the folks that cleaned the kernel.

https://lore.kernel.org/all/CA+55aFzCG-zNmZwX4A2FQpadafLfEzK...

Notice that they are fully aware of that warning.


Right. As I dug up, the warning was introduced sometime in 4.3, which goes back to 2008. It looks like it was helpful in hunting down the VLA's.

I don't know what point of view you're talking about; what they are discussing in the mailing list are silly uses of VLA that can be replaced by small arrays of fixed length sized to the worst case. These perform better, too.

This is not always practical in every situation.

C itself dynamically allocates from the stack. When a function is called, the increment in stack usage depends on which function. (If recursion is going on, it may not be easy to know the worst-case stack size, even if every function has a static frame size.)

Anyway, similarly, suppose we have used C to write a virtual machine. Suppose the virtual machine allocates function locals and temporaries on the native stack. When a VM function call is processed, the VM (a C program) has to allocate a frame for that. A VLA or alloca can provide that easily, with minimal waste.


As Linus puts it,

> Anyway, some of these are definitely easy to just fix, and using VLA's is actively bad not just for security worries, but simply because VLA's are a really horribly bad idea in general in the kernel.


Deep recursion is a bad idea in the kernel too; that doesn't mean it's a bad idea in general.

Code running in the kernel has a limited stack space. I think it's tiny, something like just a few pages.

Interrupts (which can nest) run in that stack space, not just the mainline code in syscall context. I think that might have changed though (there are interrupt specific stacks), which might also depend on the specific arch port of Linux. Still, those stacks are small and all the caveats apply to interrupt context. Don't be searching a linked list with linear recursion or using crazy VLA's.

If I had to put virtual machine execution into the kernel, though, I would definitely use VLA's or alloca to get the space required for each function to be as tight as possible to its actual requirements, the same way that the C compiler emits code for each function to allocate its frame size to exactly the needed size. Right tool for the right job.


HN discussion about it from 2018 (100 comments):

https://news.ycombinator.com/item?id=18323938



There are C99 (and beyond) compliant compilers for Windows, aren't there?

Why insist on MSVC when it's not one of them?

(Genuine question. I remember Python was C89 for similar reasons.)


It’s only recently (as in the last decade) that a drop-in replacement for MSVC’s cl.exe came along that’s ABI compatible and accepts the same CLI arguments: https://clang.llvm.org/docs/MSVCCompatibility.html

Incidentally, that’s around the same time MSVC started getting updates and became bearable.


I still don't get it.

Why is a drop in replacement for MSVC needed?

There were C99 compilers for Windows before Clang, IIRC. So why weren't they used, MSVC be damned?

Full disclosure: I've only ever used C on Linux systems.


Oh there were lots of other compilers. Hardly anyone used them because they didn't work with any existing tooling, and IIRC most of them didn't even let you link the system's UI libraries due to ABI issues and unsupported __stuff in headers. Sure, you could work without tooling like a caveman, and you could print text to stdout using only the libc you brought with you, but most programs need to do a bit more than that.


MSVC for many years ignored C completely, with the argument that C++ was the successor. Now it seems they turned on their heel with C11 and C17 support added last year, https://devblogs.microsoft.com/cppblog/c11-and-c17-standard-...


Somehow I suspect WSL and Azure Sphere aren't completely alien to that decision.


Yes, Linux uses gnu89


.. and that allows plenty of silently accepted extensions, including variable-length arrays, which were in GNU C long before C99.

It seems that only GCC 4.3 added the -Wvla option to unbundle the diagnosis of the presence of VLA's from -pedantic.

Now you can write in gnu89, while banishing VLA's from your code base, if you're so inclined.


> Modulo and remainder are not the same operation, so both python and C are correct

That just makes C look worse. Remainders are always nonnegative, as specified in the Division Algorithm [note that the Division Algorithm is the name of a theorem, not an algorithm].

https://en.wikipedia.org/wiki/Euclidean_division

Also, I'm not aware of a definition of "modulo" other than "remainder". Where are you getting the idea that they're different?

For example, https://en.wikipedia.org/wiki/Modulo_operation explicitly defines "modulo" as "remainder".


Yep. I remember that for my first C assignment in college we had to write a calendar, and leap year calculations needed modulos. I remember distinctly that I tore my hair out trying to figure this out, especially because I was prototyping my math in Python. I eventually resolved that I needed to constantly ensure my sum was > 0 (by adding my modulo base).


That will only work for cases where a >= -b. For the more general case:

> a mod b = ((a % b) + b) % b

The first remainder bounds it to -b < x < b. Then adding b goes to 0 < x < 2b. Then the second remainder gets the desired 0 <= x < b.

This will not work for b > max_int / 2 (the solution in that case is much more language dependent).
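In C that comes out to something like this (a minimal sketch; the comment carries the precondition):

  #include <stdio.h>

  /* a minimal sketch of the double-remainder trick above;
     assumes 0 < b <= INT_MAX / 2 so (a % b) + b cannot overflow */
  int mod(int a, int b) {
      return ((a % b) + b) % b;
  }

  int main(void) {
      printf("%d\n", mod(-10, 3));  /* 2, matching Python's -10 % 3 */
  }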


What's the case where the C % operator behaves in an unexpected way? I've been writing software for about 20 years and can count on zero hands the number of times someone I've worked with has had to worry about how the modulo operator behaves with negative numbers.


An example would be if you want to find the month corresponding to a timestamp. The obvious thing to do is to scale the timestamp so that it gives you the number of months after the epoch, take that number % 12, and map 0 to January, 1 to February, etc. (assuming the epoch is in January). But if you try to do that with a timestamp from before the epoch, you'll end up with 0 for January, -11 for February, -10 for March, etc.
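A toy C sketch of that failure mode, assuming (purely for simplicity) a fictional calendar of equal 30-day months:

  #include <stdio.h>

  int month_of(long long t) {
      long long months = t / (30LL * 24 * 60 * 60);
      return (int)(months % 12);  /* goes negative before the epoch */
  }

  int main(void) {
      printf("%d\n", month_of(40LL * 24 * 60 * 60));   /* 1 */
      printf("%d\n", month_of(-40LL * 24 * 60 * 60));  /* -1, not 11 */
  }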


No. If your software has to deal with dates from before 1970, "seconds since the epoch" is not a reasonable choice for a time unit.


Haskell provides two functions for integer division, div and quot, and two functions for modulus, mod and rem. quot rounds towards 0, and div rounds towards negative infinity. rem is quot's modulus, and mod is div's modulus.

mod is Python's %, rem is C's %.


In Rust, the integer types [even the unsigned ones where it makes no difference] provide rem_euclid() that gives you a non-negative answer, whereas % is defined to have the same sign as your dividend if it isn't zero.

Having this for unsigned types allows you to write rem_euclid() if that's definitely what you meant even when you aren't sure if the type you'll be using here will be signed (and thus it matters) or not.

Unfortunately Rust's std::ops::Rem trait (the trait which provides the overloadable % operator) does not offer this alternative, so if all you know about a type is that you can apply the modulus operator to it, you only get that operator and there's no uniform way to get a Euclidean remainder instead.


Scheme is the same, since at least r4rs, which came out in 1991. Not having it is, I would argue, bad taste.


It's a bit different:

Scheme:

  > (mod 5 2)
  1
  > (mod 5 -2)
  1
  > (mod -5 2)
  1
  > (mod -5 -2)
  1
  > (remainder 5 2)
  1
  > (remainder 5 -2)
  1
  > (remainder -5 2)
  -1
  > (remainder -5 -2)
  -1
Haskell:

  Prelude> a = 5 :: Integer
  Prelude> b = 2 :: Integer
  Prelude> mod a b
  1
  Prelude> mod a (-b)
  -1
  Prelude> mod (-a) b
  1
  Prelude> mod (-a) (-b)
  -1
  Prelude> rem a b
  1
  Prelude> rem a (-b)
  1
  Prelude> rem (-a) b
  -1
  Prelude> rem (-a) (-b)
  -1
So in Scheme mod is the Euclidean modulus (always >= 0) and remainder takes the sign of the dividend (comes from truncated division, same as C's %). In Haskell mod takes the sign of the divisor (floored division, same as Python), while rem behaves the same way as Scheme's remainder or C's %.

I prefer Scheme's way, because it was what we always used in classes when I studied maths. mod returning a negative number is just too weird.

EDIT: It appears that Scheme also has modulo, which comes from floored division:

  > (modulo 5 2)
  1
  > (modulo 5 -2)
  -1
  > (modulo -5 2)
  1
  > (modulo -5 -2)
  -1


R6RS specifies div, mod, div0 and mod0 and specifies their semantics.


The FORTH-83 standard adopted floored division.

https://wiki.forth-ev.de/doku.php/projects:signed_integer_di...

Lots of other languages got it wrong:

https://en.wikipedia.org/wiki/Modulo_operation#In_programmin...

Symmetric division is the kind of thing that causes rocket ships to explode unexpectedly.

Symmetric division considered harmful:

https://www.nimblemachines.com/symmetric-division-considered...

>Since its 1983 standard (Forth-83), Forth has implemented floored division as standard. Interestingly, almost all processor architectures natively implement symmetric division.

>What is the difference between the two types? In floored division, the quotient is truncated toward minus infinity (remember, this is integer division we’re talking about). In symmetric division, the quotient is truncated toward zero, which means that depending on the sign of the dividend, the quotient can be truncated in different directions. This is the source of its evil.

>I’ve thought about this a lot and have come to the conclusion that symmetric division should be considered harmful.

>There are two reasons that I think this: symmetric division yields results different from arithmetic right shifts (which floor), and both the quotient and remainder have singularities around zero.

>If you’re interested in the (gory) details, read on. [...]


Interesting article, thanks. I've created a thread for it: https://news.ycombinator.com/item?id=29751749


% has always been the remainder operator, and it is mistakenly referred to as the 'mod' operator:

"The expression x % y produces the remainder when x is divided by y" K&R C p37


Modulo, in this context, is defined as the remainder when dividing two numbers, and mod is just short for modulo. Is your objection that the term modulo didn't originally have this meaning? That aside, perhaps the % operator should be pronounced "rem(ainder)" to avoid unnecessary jargon.


When trying to understand C, it's always useful to look at what the PDP-11 did. The PDP-11 didn't have a mod instruction, but its div instruction can be used instead. Div can only take an even-numbered register as its destination; that register holds the quotient. The odd-numbered register one greater than the destination holds the remainder, which for a negative number will naturally be negative. So div can easily be used to calculate a mod: just ignore the quotient. But you'll get a negative value.


Doesn’t C just do whatever the underlying hardware does for modulo w negative numbers? I have always worked around this by adding N to numerator:

    r = (i + N) % N
which handles cases where i >= -N, which is mostly what I run into when i is negative.

I remember Ada having two different operators REM and MOD which I think is the best language choice.


In C this used to be “implementation defined”. That changed in C99 and C++11 (https://en.wikipedia.org/wiki/Modulo_operation#In_programmin..., note c)

Nitpick: it never was “just do whatever the underlying hardware does”. C standards try hard to avoid mentioning hardware at all (and, as far as I know, succeed at that), and there’s plenty of hardware that doesn’t do mod in hardware, yet supports it in C.


In C there are a ton of things that are hardware dependent. For example, the size of every data type is not specified, or the fact that char is signed or unsigned, or the representation of signed integers (though I don't think there still exists a platform that doesn't use 2's complement), and a ton of other things.


I think the parent comment just meant that implementation dependent is technically not the same as hardware dependent.

For example, the size of int is not specified by the standard, as you said. But it can still be different from the natural word size of the underlying hardware, if the compiler vendor chooses to make it so. Case in point is int on 64-bit platforms is usually 32-bit in practice. In Microsoft's compiler, even long int is 32-bit even on 64-bit hardware.


You make a good point, but I think he was more generalizing to actual implementations of C compilers. And that does align with my experience.


Ada mod takes the sign of the second operand.

I wouldn't even bother with this, I'd just use a modular type (i.e. unsigned).

    with Ada.Text_IO;
    
    procedure Main is
       -- Modular type (unsigned) which can take values of [0, 15].
       type Nibble is mod 16;
       Value : Nibble := 0;
    begin
       -- Prints 0 3 6 9 12 15 2 5 8 11 14 1 4 7 10 13 ...
       for I in 0 .. 63 loop
          Ada.Text_IO.Put_Line (Value'Image);
          Value := Value + 3;
       end loop;
    end Main;

Then make that type the index into my circular buffer and then I never need to worry about it.

    type Circular_Buffer is array (Nibble) of Integer;
    Buffer : Circular_Buffer;


In C99, the modulo operator is defined thusly: “If the quotient a/b is representable, the expression (a/b)*b + a%b shall equal a”

Source: The draft C99 standard http://www.open-std.org/jtc1/sc22/wg14/www/docs/n1256.pdf PDF page 94, physical page 82.

Yes, as someone who both gets open source bug reports and has sent out open source bug reports, what determines whether a behavior being looked at is a bug depends on what the relevant standards say.


Meanwhile, C89:

https://port70.net/~nsz/c/c89/c89-draft.html#3.3.5

> When integers are divided and the division is inexact, if both operands are positive the result of the / operator is the largest integer less than the algebraic quotient and the result of the % operator is positive. If either operand is negative, whether the result of the / operator is the largest integer less than the algebraic quotient or the smallest integer greater than the algebraic quotient is implementation-defined, as is the sign of the result of the % operator. If the quotient a/b is representable, the expression (a/b)*b + a%b shall equal a .


Between C89 and C99, the standard was updated to require integer division to truncate towards 0.

C99 draft standard: http://www.open-std.org/jtc1/sc22/wg14/www/docs/n1256.pdf PDF page 94, physical page 82, “When integers are divided, the result of the / operator is the algebraic quotient with any fractional part discarded.” with a footnote: “This is often called ‘‘truncation toward zero’’”

I wish % was a modulo instead of a remainder operation; this bug has bitten me more than once.


> I remember Ada having two different operators REM and MOD which I think is the best language choice.

According to Boute 1992,

> The Ada Reference Manual complements integer division (/) with two functions mod and rem. The first of these violates the division rule

so it doesn't so much have two different operators as one operator and one broken mess, and its one operator is the truncating division (C-style).

Also a quick check of the Hyperspec (or, again, Boute 1992) shows there are not two but 4 definitions based on rounding quotients, none of which (I think) "always returns a positive number" (which Python also does not do, incidentally).

There is a 5th definition, which Boute 1992 champions, for which the property asserted in the title holds.
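A minimal C sketch of that euclidean definition (mod_euclid is my name for it):

  #include <stdio.h>
  #include <stdlib.h>

  /* the remainder is nonnegative whatever the signs of the inputs */
  int mod_euclid(int a, int b) {
      int r = a % b;
      if (r < 0)
          r += abs(b);
      return r;
  }

  int main(void) {
      printf("%d %d\n", mod_euclid(-7, 3), mod_euclid(-7, -3));  /* 2 2 */
  }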


Btw that code won’t work in the general case (where i could be less than -N). For that case you should probably do

r = ((i % N) + N) % N

There may be faster ways, but that should at least be correct for all inputs.


Incorrect for N > INT_MAX/2.


I'm not sure I follow.

If defined as

  int16_t mod(int16_t i, int16_t N) {
      return ((i % N) + N) % N;
  }

There do not seem to be any issues all the way up to INT16_MAX. Even with negative values of N, it produces the same value as Python does (although I am not certain I understand what modulus means with a negative N).


Let i=INT_MAX-1, N=INT_MAX.

i%N is just i, and then (i%N)+N overflows. The result is undefined. Even on systems where it is defined, the typical result would be incorrect.

It’s a bit weird that you chose int16_t for your example… it’s the kind of thing you might do if you were trying to make some point about different integer types, but you didn’t give a reason for using int16_t, so the point is lost. I can’t read your mind.


I chose int16 just to make it easier to write an exhaustive search. That is where my assertion that the code works correctly comes from.

The case you suppose makes sense, but for some reason does not produce incorrect results when I run it, for instance:

  N = INT16_MAX - 2:
  mod(32762, 32765) = 32762  
  mod(32763, 32765) = 32763  
  mod(32764, 32765) = 32764  
  mod(32765, 32765) = 0  
  mod(32766, 32765) = 1  
  mod(32767, 32765) = 2
I wonder if the compiler just used the full register. I'll have to check the disassembly.


INT_MAX and INT16_MAX are not interchangeable.

> I wonder if the compiler just used the full register. I'll have to check the disassembly.

No need, you'd figure this out just by knowing what the C spec says and by knowing what the value of INT_MAX is. When you compute the remainder in C, the operands first undergo "integer promotion" and then go through the "usual arithmetic conversions". The result is that you end up with two values of the same type, no narrower than int.

So in C, the following function:

    short broken_modulo(short x, short y) {
        return ((x % y) + y) % y;
    }
Is equivalent to the following, with the implicit casts made explicit:

    short broken_modulo(short x, short y) {
        int ix = (int)x;
        int iy = (int)y;
        // All int, no short.
        int result = ((ix % iy) + iy) % iy;
        return (short)result;
    }
Note that this conversion happens in ALL C implementations, because it is mandated by the spec, and no implementation-defined stuff is happening here. Do note that it is possible for int to be 16-bit, e.g.,

    // Possible.
    typedef int int16_t;
It's just uncommon to see that in this millennium. I used "int" and "short" above rather than "int16_t" because the exact rank of "int16_t" vs "int" with regard to arithmetic conversions is implementation dependent, but the rank of "short" vs "int" is not implementation dependent.


Thanks for explaining!


Not if i is less than -N. A modulo operator that wraps -1 to N-1 would be way better.


> I remember Ada having two different operators REM and MOD

Taken from Modula-2.


So when Rust implemented this they chose to leave the % operator like in C and hide the one that is more useful for humans behind `rem_euclid`. This is unfortunate but at least it exists.

The RFC that introduced rem_euclid: https://github.com/rust-lang/rfcs/blob/master/text/2169-eucl...


> So when Rust implemented this

For what it's worth, Python does not implement euclidean modulo (it implements flooring division aka "F-definition", proposed by Knuth), so Rust did not implement "this" it implemented an operation which, as far as I can tell, Python doesn't have.


The difference between flooring and euclidean modulo is quite irrelevant in practice for the most part. Either one resolves the main issue with truncating modulo, which is the issue outlined in the tweet.


> The difference between flooring and euclidean modulo is quite irrelevant in practice for the most part.

It seems quite relevant as it diverges significantly from the assertion put forth in the tweet: Python's remainder takes the sign of the divisor. So flooring division fixes the more common issue but leaves another one present.


The tweet we're discussing:

> python, unlike C, had the mod operator always return a positive number, even if the first argument is negative

That is true for Rust's rem_euclid as well: https://play.rust-lang.org/?version=stable&mode=debug&editio...


In practice, when you use a modulo operator the second operand is usually some kind of length or size, which is inherently non-negative, while the first operand is an index or offset, which could easily be negative if there's no natural origin point. So it's much rarer that you have to deal with a potentially negative second operand, compared to having to deal with a potentially negative first operand.


The IEEE754 remainder function even beats C in that it can return a negative value even if both inputs are positive. IIRC the logic that leads to this is:

    remainder(a, b):
        n = round(a / b)
        return a - n * b
Because it rounds to get n, n * b can be greater than a :D

Even more fun: as the original x87 implementation preceded IEEE754, its remainder operation (fprem, because it's technically a partial remainder) used truncation rather than rounding, as that's the obvious behaviour. To support the IEEE754 specification they had to add fprem1, which performs a rounding operation instead. Hurrah! :D
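You can check this with C's math.h (a quick sketch; link with -lm):

  #include <math.h>
  #include <stdio.h>

  int main(void) {
      /* round(5.0 / 3.0) is 2, so 5 - 2 * 3 = -1: a negative
         result from two positive inputs */
      printf("%g\n", remainder(5.0, 3.0));  /* -1 */
      printf("%g\n", fmod(5.0, 3.0));       /* 2 (truncating) */
  }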


What a beautiful way to shoot yourself in the foot


When I first encountered it, it sent me down a path of paper reading that made me sad.

Even better, because it's defined in terms of rounding, it changes according to rounding mode :D


Ruby's % operator behaves the same way, for both integer and float values. A positive-only modulus function is useful for wrapping angles. This bit me a week or two ago when I was porting audio algorithms from Ruby to C, and had to implement a positive modulus function[0].

[0] https://github.com/mike-bourgeous/mb-sound/blob/a8eb1232ae35...


Let’s look at the free draft version of C99 ( http://www.open-std.org/jtc1/sc22/wg14/www/docs/n1256.pdf ). On PDF page 94 (physical page number 82), we cover integers:

If the quotient a/b is representable, the expression (a/b)*b + a%b shall equal a

For division:

The result of the / operator is the quotient from the division of the first operand by the second

Which is remarkably vague. Presumably, they are doing the kind of third grade math where we get a quotient and a remainder (truncating division). So, 1/3 is 0, 2/3 is 0, 3/3 is 1, 4/3 is 1, 5/3 is 1, 6/3 is 2, etc.

That in mind, let’s look at -1 % 3 (is 2 when doing a proper modulo operation)

With integers:

(-1 / 3) is 0 (1 / 3 is “0 remainder 1” and multiplied by -1 still gives us 0)

This in mind, the modulo (-1 % 3) needs to give us -1:

(0 * 3) + -1 = -1

So a%b is negative here.

There are some ways to work around this and get a proper modulo.

One common case is to get something modulo a power of 2. This will give us a modulo, 20 in this case, as per the C standard:

  #include <stdint.h>
  #include <stdio.h>
  int main(void) {
      uint32_t foo = 12;
      printf("%u\n", -foo % 32);  /* prints 20 */
  }
The reason is because, as per PDF page 55 (page 43) of that draft C99 standard:

if the new type is unsigned, the value is converted by repeatedly adding or subtracting one more than the maximum value that can be represented in the new type until the value is in the range of the new type.

So, since foo is unsigned, and a 32-bit int, -foo is -12 + 4294967296, or 4294967284. 4294967284 % 32 as per the spec above is 20, i.e. -12 mod 32.

This trick only works if we modulo a number by a power of 2.

Yes, I have used this trick when writing “golfed” C code which needs to do a circular rotate:

  uint32_t r=12,g=1;g=g>>r|g<<-r%32;
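Ungolfed, the same rotate (a sketch; assumes 0 < r < 32):

  #include <stdint.h>
  #include <stdio.h>

  /* -r wraps modulo 2^32, and since 32 divides 2^32,
     -r % 32 == 32 - r for 0 < r < 32 */
  uint32_t rotr32(uint32_t g, uint32_t r) {
      return (g >> r) | (g << (-r % 32));
  }

  int main(void) {
      printf("%08x\n", rotr32(1, 12));  /* 00100000 */
  }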


I was reading Crafting Interpreters about a month ago and thought of writing a FizzBuzz in the language taught in the book. The language doesn't have a mod operator, so I wrote a function for that. I tried testing it against Python's mod, but it didn't match where different signs were involved. I thought operands should be treated as absolute values and the result negated when two different signs are involved. Then I found that Python did it differently from something like C.


I know why it's implemented that way. But what baffles me the most is "What is it actually useful for?" I've never encountered a situation where I want % to return a negative number, and I haven't seen anyone do it either. The situations I've encountered are either "they are always positive so it doesn't matter" or "I want a positive one". Does anyone have an example of when it would be useful?


C's version is a byproduct of this being easier for hardware to calculate, stemming back from the time, long ago, when that was actually a relevant concern.

Unfortunately hardware never fixed this issue, and C is never going to expose an operator that doesn't map primitively to the hardware, so it became the established default.


An amusing memory from grade school is that we stopped using remainders before we started using negative numbers. So if you were to ask me how a remainder operator should behave on negative numbers, I'd be at a loss. I don't think it was ever defined in any of my math classes, through grad school.


Careful what you wish for. What should -7 / 2 return? -3 kind of makes sense, because 7/2 is 3.

It is also desirable to have the remainder consistent with the result of division, such that x = d*r + mod, where r is the result of division. Combining that with the above requires negative mods.


Moving past the intuition for how division and remainders work for real numbers, there are other ways to look at "obvious" answers that suggest -7/2 = -4.

Consider x/3=y. For real numbers, it's trivial to manipulate this equation into (x-3)/3=y-1. However, this only works when integer division rounds down.

   5/3 =  1 rem 2  or  1 rem 2
   4/3 =  1 rem 1  or  1 rem 1
   3/3 =  1 rem 0  or  1 rem 0
   2/3 =  0 rem 2  or  0 rem 2
   1/3 =  0 rem 1  or  0 rem 1
   0/3 =  0 rem 0  or  0 rem 0
  -1/3 = -1 rem 2  or  0 rem -1
  -2/3 = -1 rem 1  or  0 rem -2
  -3/3 = -1 rem 0  or -1 rem  0
One of these has a consistent pattern, the other doesn't.

All of this is a fairly moot point: The answer is it should do what the algorithm using it needs it to do.


True, but both Python’s “int()” operation and the C99 and C11 standard for division operators truncate towards 0. [1]

[1] e.g. http://www.open-std.org/jtc1/sc22/wg14/www/docs/n1256.pdf PDF page 94/physical page 82


Python's int() truncates towards 0, but (at least in Python 3, I don't know if it was always the same in older versions) its division operator always floors, so -7 // 2 gives -4 even though int(-3.5) is -3. (The int() function is best thought of as just "make this float into an integer, I don't really care how"; if you want to be more specific, you have math.floor(), math.ceil() and math.trunc(), and you also have round(), which rounds towards the closest integer; if the float is exactly halfway between two integers, it rounds to the even one.)


It is not so much my intuition but the intuition of the language designers. I would prefer Euclidean division personally.


It's highly desirable for -7/2 to equal -4, since this lets integer division by powers of two be exactly equivalent to a two's complement arithmetic right shift. In a C mindset, rounding toward zero is pretty much an objectively wrong holdover from before we knew better.
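A quick illustration (right-shifting a negative value is technically implementation-defined in C, but virtually every platform does an arithmetic shift):

  #include <stdio.h>

  int main(void) {
      int x = -7;
      /* / truncates toward zero; an arithmetic >> floors */
      printf("%d %d\n", x / 2, x >> 1);  /* -3 -4 */
  }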


In my experience, when looking at how division should operate for a given well-established language, it doesn’t matter what you and I think should be the result of the operation; it matters what the standard says on it.

From PDF page 94 (physical page 82) of the draft C99 standard: [1] The result of the / operator is the quotient from the division of the first operand by the second

This is a little vague; it would be better if they called it the “quotient from Euclidean division” (EDIT: This was incorrect. Actually, it’s the quotient from algebraic division, with the fraction truncated, also called “truncation towards zero”. See discussion below.), which is well defined as being -3 in the above example; see https://en.wikipedia.org/wiki/Euclidean_division (EDIT: Euclidean is -4; truncation towards 0 is -3)

[1] http://www.open-std.org/jtc1/sc22/wg14/www/docs/n1256.pdf


it doesn’t matter what you and I think should be the result of the operation; it matters what the standard says on it

In practice, this offloads the problem to your programmer in at least every other case (i.e. when % doesn't do what they need out of the box), who then miscalculates or mistests their solution, and now the problem is yours.

It doesn't matter what the standard says; there should be many functions, one for each combination of signs (or roundings) you want in the results. Because your library doesn't suck, and your programmer has things to do instead of fiddling with off-by-ones and other primitive but highly error-prone-under-stress bs.


Euclidean division actually gives -7 = -4*2 + 1


It’s -3 in GCC 11.2.0

  #include <stdio.h>
  int main(void) { printf("%d\n", -7/2); }
Let’s look at, oh, C11: http://www.open-std.org/jtc1/sc22/wg14/www/docs/n1570.pdf

PDF page 110, physical page 92: When integers are divided, the result of the / operator is the algebraic quotient with any fractional part discarded. with a footnote: This is often called ‘‘truncation toward zero’’

-7/2 is -3.5, so discarding the fractional part gives us -3


Yes, and this is not Euclidean division in the usual sense.


Indeed. The C99 and C11 standards define integer division as the “algebraic quotient with any fractional part discarded”, not as Euclidean division.


This whole thread is a nice review and exercise of the 2nd chapter of my computer and network security course.


> Careful what you wish for. What should -7 / 2 return? -3 kind of makes sense, because 7/2 is 3.

-4 also makes sense.

> It also desirable to have the remainder consistent with the result of division, such that x = d*r + mod, where r is the result of division. Combine that with the above requires negative mods.

Python's behaviour is consistent with that definition:

    >>> divmod(-7, 2)
    (-4, 1)
-7 = -4 * 2 + 1


Actually, I vaguely recall that if the divisor (or whatever the right number is) is negative, the result will be negative. But that's still consistent with math modulus, as is the Python approach. I always miss Python mod when I go to other languages, honestly...


Which is why newer languages just give you both :p. Zig (for example) has @mod and @rem for modulo and remainder and will complain when you use % on signed integers because it could be ambiguous and asks you to type in specifically which variant you intended to use.


Common Lisp has both `mod` and `rem` together with their matching operation `floor` and `truncate` respectively.

http://clhs.lisp.se/Body/f_mod_r.htm


This bit me the other way in JavaScript a couple of weeks back! I'd been used to Ruby which never returned negative numbers, but JavaScript follows C and it messed me up for 20 minutes.


It's even worse than most people seem to realize. For many years the ISO standard for C included the line:

"If both operands are nonnegative then the remainder is nonnegative; if not, the sign of the remainder is implementation-defined"

Fortunately, this was changed in C99, so that now the remainder is consistently defined across all implementations.


Across all implementations that fully support C99, which is not all implementations.


Perl can do both ways:

  $ perl -E 'say(4 % -3)'
  -2

  $ perl -E 'use integer;say(4 % -3)'
  1


Interpreter state ("what's been `use`d") completely changing the meaning of that expression sounds like the worst possible way to do it.


It is lexically scoped, so you can wall it off with a {} block. And there's 'no integer;' to turn it off as well.

It's a pattern in Perl, like "use strict;" for example, which is also lexically scoped and has a "no strict;" counterpart.


Maybe it amounts to the same thing, but the example you give is not what the original post is about, which is -4 % 3. It is true that "use integer" affects both calculations.


Side question: what is the common behavior for mod on floating point? I have used it on integers plenty of times of course but not come across a situation where I would need similar behavior for floats. Is there something that's part of typical language's standard libraries or do you roll your own if you need it?


See std::fmod and std::remainder:

https://en.cppreference.com/w/cpp/numeric/math/fmod

https://en.cppreference.com/w/cpp/numeric/math/remainder

A valid use may be when approximating a periodic function via a polynomial (e.g. sin/cos), where you will want to first perform range reduction like std::fmod(x, 2 * pi).
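For instance, in C spelling (a minimal sketch; the constant is just 2*pi written out):

  #include <math.h>
  #include <stdio.h>

  /* range reduction before a polynomial approximation; fmod keeps
     the sign of its first argument, so negative inputs need an
     extra shift into [0, 2*pi) */
  double reduce(double x) {
      const double two_pi = 6.283185307179586;
      double r = fmod(x, two_pi);
      if (r < 0.0)
          r += two_pi;
      return r;
  }

  int main(void) {
      printf("%f\n", reduce(-1.0));  /* ~5.283185 */
  }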


Note that, of course, zero is not "a positive number" since we're operating on the integers here and "signed" zero isn't a thing for either C or Python integers, but I think we all know what Carmack meant here.


You would think modding a negative number -x by m should really mean taking the mod of the inverse of x in Z_m. Additive modular inverse. Eg -2 mod 3 = 1. In Z_3 we have 1 + 2 = 3 = 0, so -2 -> 1.


I've noticed this when switching contexts between Python and C.

As Carmack notes it's great for array/buffer indices - and bites you if you expect it in C!


They should have used "non-negative" instead of "positive." At first glance of the title I thought "What? The Python mod operator never returns a 0?"


Fun fact: SQL does the same as C, but calls it modulo (at least in postgres, and I assume it's the standard)


What other languages also do this? I know Lua is one.



To explain the key for those who don't want to read the article: "truncated" is C's behaviour, "floored" is Python's.


Golang unfortunately also produces negative results from the mod operator.

It was quite confusing trying to port this diff algorithm https://blog.robertelder.org/diff-algorithm/


A negative remainder is nonsense, isn't it?


"Famous person has tweeted something"

mouse cursor moves toward up-arrow


mod and rem are not the same.



