PERF: fix performance regression from #62542 #62623
Conversation
What makes this faster than the original code? Seems like we've only added instructions to the conversion function(s), so I'm worried we are overlooking something
`pandas/_libs/src/parser/tokenizer.c` (Outdated)
```c
} else {
  *error = ERROR_OVERFLOW;
  return 0;
  break;
```
Can we try to keep these as immediate returns? This opens the door to ambiguous parser behavior in the case of "multiple" failures.
---
The changes in #62542 were motivated by this branch, where big numbers that indicate a float were cast to string due to overflow.

Can I still check subsequent characters to change the error code? If not, I don't think there is anything to do in this PR, and it's best to revert the problematic commit.
---
I put back the immediate return, but added an inline function before it to check whether the value is still not an integer after the overflow.
---
Ah OK thanks - that's helpful. I somewhat disagree with the premise of that change to cast to float even if it's a lossy operation. I understand that in some cases there is a desire for numeric operations on numbers like that, but it's unclear that this should take precedence over the string cast, which is in some sense more "value preserving".
The larger issue is that pandas does not have native support for Decimal precision types
---
> I somewhat disagree with the premise of that change to cast to float even if its a lossy operation.
Even if it's a lossy operation, I don't think it's up to an integer-parsing function to decide that it should be a string. The way I set up this PR and #62542, parsing the word as an integer is skipped once we verify that it isn't one. It still tries to parse as the other datatypes in the order they are prioritized here:
`pandas/pandas/_libs/parsers.pyx`, line 1072 in 1863adb:

```cython
for dt in self.dtype_cast_order:
```
I don't put much value on the performance increase that I reported in the first edit. I still need to update the description after all these changes; I'm just waiting until the changes I made are considered correct. Additionally, the changes should only apply to integer parsing; everything else can be considered noise.
@WillAyd I updated the benchmark results in the description. There were no significant changes.
`pandas/_libs/src/parser/tokenizer.c` (Outdated)
```c
  return self->seen_uint && (self->seen_sint || self->seen_null);
}

static inline void check_for_invalid_char(const char *p_item, int *error) {
```
Can you document what this function does? The name `check_for_invalid_char` is a bit too vague - this is better described as something like `cast_char_p_as_float`, no?
---
I'd also suggest that you either return `int` and drop the `int *` argument, or return something useful (e.g. the parsed float value) and then set the pointer value.
---
I added a Doxygen comment. It's also returning the pointer to the last verified character.
---
I kept the error pointer in the function to prevent code duplication, and also because the main purpose of the function is just to assign a value to it. Considering that it now returns the position of the last verified character, it's possible to change the error value outside the function, but I think it's cleaner the way it is.
---
Why is it returning a `char *`? It doesn't look like you are using the return value anywhere.

I'm leaning more towards the former approach unless there is a reason for it to return a value; it's really common practice to return an integral code to denote an error or not (even in C++ you'll see Arrow do this all over the place). Tucking that return value away in a pointer is far less common.

It's also more performant to return the `int` value directly, although in this particular case that's probably too far down the stack to be noticeable.
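To illustrate the two styles being discussed, here is a minimal sketch (the `check_style_*` helpers are hypothetical, not the tokenizer's actual code):

```c
#include <stddef.h>

// Out-pointer style: the status travels through a pointer argument.
static void check_style_out_param(const char *s, int *error) {
  *error = (s != NULL && *s >= '0' && *s <= '9') ? 0 : 1;
}

// Return-code style: the status is the return value, which is the more
// common C idiom (and mirrors Arrow's Status-returning APIs in C++).
static int check_style_return_code(const char *s) {
  return (s != NULL && *s >= '0' && *s <= '9') ? 0 : 1;
}
```

The return-code style also lets callers chain checks without declaring a status variable first.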
---
I changed it to return a status code.
`pandas/_libs/src/parser/tokenizer.c` (Outdated)
```c
  p_item++;
}

while (*p_item != '\0' && isspace_ascii(*p_item)) {
```
Can this be combined with the previous loop? Is there a reason for trailing whitespace to be handled specially, or is there a reason at all to allow whitespace?
---
It needs a separate loop because a case like `"7890123 1351713789"` should be invalid.
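A minimal sketch of why a single merged loop would not work (hypothetical `is_single_integer_token` helper, assuming plain ASCII input): the digits must form one contiguous run, so whitespace is only legal before the first digit and after the last one.

```c
#include <ctype.h>

// Accepts: optional leading spaces, one run of digits, optional trailing
// spaces. Rejects anything else, e.g. a second number after the first.
static int is_single_integer_token(const char *s) {
  while (*s != '\0' && isspace((unsigned char)*s)) s++;  // leading whitespace
  if (!isdigit((unsigned char)*s)) return 0;             // need at least one digit
  while (isdigit((unsigned char)*s)) s++;                // the digit run
  while (*s != '\0' && isspace((unsigned char)*s)) s++;  // trailing whitespace
  return *s == '\0';  // any leftover character invalidates the token
}
```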
---
Why do we allow trailing white space though?
---
It is permitted below too if an overflow doesn't occur. I added it in this function to make it consistent.
---
Why are we doing this at all though? Are we stripping trailing whitespace for any other case in the tokenizer?
---
> Why are we doing this at all though?

I couldn't find any particular reason in the code to do this.

> Are we stripping trailing whitespace for any other case in the tokenizer?

Every function that parses a string in `tokenizer.c` ignores leading and trailing whitespace.

I changed the behavior to just check for the character after consuming all digits. The way the function was used before didn't change the pointer position after calling it. The function now doesn't permit trailing whitespace.
Additionally, don't permit trailing whitespace.
`pandas/_libs/src/parser/tokenizer.c` (Outdated)
```c
    d = *++p;
} else {
  *error = ERROR_OVERFLOW;
  int status = check_for_invalid_char(p);
```
I'm not sure why tokenizer.h defines errors individually, but it would be much easier if you created an enum like:

```c
enum TokenizerError {
  TOKENIZER_OK,
  ERROR_NO_DIGITS,
  ERROR_OVERFLOW,
  ERROR_INVALID_CHARS
};
```

Then you can just assign `*error = check_for_invalid_char(p)` and let the call stack naturally handle this (where 0 is no error).

Although it's still a bit strange to apply an error on the preceding line and then reassign it here. I wonder if there shouldn't be a generic function to check for errors here.
---
I also think the name `ERROR_INVALID_CHARS` is a little vague - maybe `ERROR_INVALID_FLOAT_STRING` is better?
---
> I'm not sure why tokenizer.h defines errors individually, but it would be much easier if you created a struct

I was even thinking about doing it, but decided against it, thinking that this refactor would pollute the PR.

> I also think the name ERROR_INVALID_CHARS is a little vague - maybe ERROR_INVALID_FLOAT_STRING is better?

This function checks for any invalid character, not specifically floating-point characters, so I think `ERROR_INVALID_CHARS` is preferable.

> I wonder if there shouldn't be a generic function to check for errors here

Like a function that overwrites the current error according to some precedence? For example: `TOKENIZER_OK < ERROR_NO_DIGITS < ERROR_OVERFLOW < ERROR_INVALID_CHARS`. If it's currently `TOKENIZER_OK`, it would always be overwritten; `ERROR_INVALID_CHARS` would take precedence above all: it would always replace the current error and never be overwritten?
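A minimal sketch of that precedence idea (illustrative only; `merge_error` is a hypothetical helper and assumes the enum constants are declared in increasing order of severity):

```c
typedef enum {
  TOKENIZER_OK,
  ERROR_NO_DIGITS,
  ERROR_OVERFLOW,
  ERROR_INVALID_CHARS
} TokenizerError;

// Keep whichever error ranks higher; TOKENIZER_OK (0) is always replaced.
static TokenizerError merge_error(TokenizerError current,
                                  TokenizerError candidate) {
  return candidate > current ? candidate : current;
}
```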
---
> Although its still a bit strange to apply an error on the preceding line then reassign it here.
Another option would be an if-else statement. The overflow error should always be assigned if we don't find an invalid character.
---
I refactored the code for `int64` and `uint64` to use the enum. Additionally, I preferred to use `if-else` to assign the error code.

Another PR would be required to create and handle errors in the other tokenizers.
`pandas/_libs/src/parser/tokenizer.c` (Outdated)
```c
 * @return Integer 0 if the remainder of the string contains only digits,
 *         otherwise returns the error code for [ERROR_INVALID_CHARS].
 */
static inline int check_for_invalid_char(const char *p_item) {
```
Can you add the length of the string as an argument? I realize this is a static function, but it's still best to guard against buffer overruns in case of a future refactor.
---
This information is not available in any of the parent functions, so I would have to call `strlen` to use it. I don't see much value in it.
---
It's about minimizing the risk during refactors. C is not an inherently safe language, so you need to be somewhat paranoid when writing functions.

You are correct that, at face value, calling `strlen` is pretty... well, dumb. But it's a sign that a refactor can happen in another PR to better keep track of the length of a string while processing it.
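For what a length-aware signature could look like, here is a sketch (hypothetical `check_for_invalid_char_n` with stand-in error codes; not the PR's actual function):

```c
#include <stddef.h>

enum { CHECK_OK = 0, CHECK_INVALID_CHARS = 1 };  // stand-ins for the real codes

// Length-bounded variant: never reads past p_item + len, so a caller that
// passes a buffer without a terminating NUL cannot cause an overrun.
static int check_for_invalid_char_n(const char *p_item, size_t len) {
  for (size_t i = 0; i < len && p_item[i] != '\0'; i++) {
    if (p_item[i] < '0' || p_item[i] > '9') {
      return CHECK_INVALID_CHARS;
    }
  }
  return CHECK_OK;
}
```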
```diff
 data[i] = str_to_int64(word, INT64_MIN, INT64_MAX,
                        &error, parser.thousands)
-if error != 0:
+if error != TOKENIZER_OK:
```
This pattern would definitely be cleaner with a macro that returns on non-zero (in a follow-up PR is fine).
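One possible shape for such a macro on the C side (a hypothetical sketch, not something in the PR):

```c
// Evaluate an expression yielding an error code; bail out of the calling
// function with that code if it is non-zero.
#define RETURN_IF_NOT_OK(expr)        \
  do {                                \
    int _status = (expr);             \
    if (_status != 0) return _status; \
  } while (0)

static int step_that_succeeds(void) { return 0; }  // stand-in parse steps
static int step_that_fails(void) { return 2; }

static int run_pipeline(void) {
  RETURN_IF_NOT_OK(step_that_succeeds());
  RETURN_IF_NOT_OK(step_that_fails());  // pipeline stops here with code 2
  return 0;
}
```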
```c
 * @return TOKENIZER_OK if the remainder of the string contains only digits,
 *         otherwise returns ERROR_INVALID_CHARS.
 */
static inline TokenizerError check_for_invalid_char(const char *p_item) {
```
I think my comment got lost but you should add the length of the string to the arguments here, so that we can mitigate the risk of buffer overruns.
---
I responded to it. The length of the string is not available in the parent functions, so I would have to call `strlen` to use this function, and `strlen` also relies on the null character.
```c
} else {
  *error = ERROR_OVERFLOW;
  TokenizerError status;
  if ((status = check_for_invalid_char(p)) > ERROR_OVERFLOW) {
```
Adding ordering semantics to the enum isn't really clear on the intent. Probably best to make the function check for more than invalid chars and return the appropriate error code, rather than handling this logic multiple times in the caller
---
> Probably best to make the function check for more than invalid chars and return the appropriate error code

Currently, this function is only called after an integer overflow, and it starts checking from the character that caused the overflow. Considering its usage for integer parsing, I don't see other error codes that it should return.

> Adding ordering semantics to the enum isn't really clear on the intent.

Should I just compare with `TOKENIZER_OK`?
---
Interesting - I wasn't thinking of the full context here, but you bring up some interesting points.

I guess I'm wondering why we even need this function at all. Long ago, pandas only supported C89, and even when C99 was around it took a long time for MSVC to support it. However, those days are long behind us.

C99 offers the `strtoull` function, which would seemingly replace everything this function is trying to do:
```c
#include <errno.h>
#include <inttypes.h>
#include <stdio.h>
#include <stdlib.h>

int main(void) {
  char *endptr;
  errno = 0;
  uint64_t val = strtoull("123456789123456789123456789", &endptr, 0);
  if (*endptr != '\0') {
    printf("End of string not reached\n");
    return errno;
  }
  if (errno == EINVAL) {
    printf("Invalid value\n");
    return errno;
  }
  if (errno == ERANGE) {
    printf("Value is out of range\n");
    return errno;
  }
  printf("My value is: %" PRIu64 "\n", val);
  return errno;
}
```
So maybe we can use the standard function instead of rolling our own logic here?
---
Nice! I'm always looking forward to simplifying things. But I don't think it will be as straightforward as calling `strtoull`, mainly because of the thousands separator, which would require some preprocessing before calling the function. If the thousands separator doesn't exist, then I think it will be very simple.

Do you think that using `strtoull` should go into this PR or have a dedicated one?
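The preprocessing could be as simple as copying characters into a scratch buffer while skipping the separator, then deferring to `strtoull` (a rough sketch with a hypothetical `parse_uint64_with_sep` helper; buffer size and error handling are illustrative only):

```c
#include <errno.h>
#include <inttypes.h>
#include <stdlib.h>

// Strip the thousands separator, then let strtoull handle the parsing and
// overflow detection. Returns 0 on success, non-zero on any failure.
static int parse_uint64_with_sep(const char *s, char sep, uint64_t *out) {
  char buf[64];
  size_t j = 0;
  for (size_t i = 0; s[i] != '\0'; i++) {
    if (s[i] == sep) continue;           // drop separators
    if (j + 1 >= sizeof(buf)) return 1;  // input too long for the sketch
    buf[j++] = s[i];
  }
  buf[j] = '\0';

  char *endptr;
  errno = 0;
  uint64_t val = strtoull(buf, &endptr, 10);
  if (endptr == buf || *endptr != '\0') return 1;  // empty or partial parse
  if (errno == ERANGE) return 1;                   // value out of range
  *out = val;
  return 0;
}
```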
- closes #51295 (`read_csv()` fails to detect floats larger than 2.0^64)
- Added an entry in the latest `doc/source/whatsnew/vX.X.X.rst` file if fixing a bug or adding a new feature.

Benchmarks
```sh
asv continuous -f 1.1 -E virtualenv:3.13 "db31f6a38353a311cc471eb98506470b39c676d8~" HEAD -b io.csv
asv compare db31f6a38353a311cc471eb98506470b39c676d8~ HEAD
```
IO.csv benchmarks:
cc @WillAyd