[SPARK-48922][SQL] Avoid redundant array transform of identical expression for map type #50245
Conversation
@viirya @dongjoon-hyun @kazuyukitanimura could you please take a look?
```scala
Some(
  Alias(nullCheckedInput, expected.name)(
    nonInheritableMetadataKeys =
      Seq(CharVarcharUtils.CHAR_VARCHAR_TYPE_STRING_METADATA_KEY)))
```
Could you explain why you want to remove the `CHAR_VARCHAR_TYPE_STRING_METADATA_KEY` from the metadata?
I just want to be consistent with #47843
@viirya Could you help me understand why the `CHAR_VARCHAR_TYPE_STRING_METADATA_KEY` is removed from the metadata?
Hmm, I don't remember the exact reason for the removal. Maybe it is because it would bring additional checks or transformations. Since this is the input expression used to write to the table, the metadata is useless (the writer checks against the table columns anyway).
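As a rough illustration of what marking a key non-inheritable does, here is a simplified model using plain Scala maps rather than the Catalyst `Alias`/`Metadata` API; the key's string value is assumed for illustration:

```scala
// Simplified model of alias metadata inheritance, NOT the Catalyst code:
// the alias inherits the child's metadata minus the non-inheritable keys.
def inheritedMetadata(
    childMeta: Map[String, String],
    nonInheritable: Seq[String]): Map[String, String] =
  childMeta -- nonInheritable

// Assumed literal for CharVarcharUtils.CHAR_VARCHAR_TYPE_STRING_METADATA_KEY.
val charVarcharKey = "__CHAR_VARCHAR_TYPE_STRING"
val childMeta = Map(charVarcharKey -> "varchar(10)")

// The char/varchar annotation does not propagate through the alias, so the
// write path cannot trigger extra char/varchar handling from it.
assert(inheritedMetadata(childMeta, Seq(charVarcharKey)).isEmpty)
```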
Overall, LGTM. cc @viirya @dongjoon-hyun
[SPARK-48922][SQL] Avoid redundant array transform of identical expression for map type

### What changes were proposed in this pull request?

Similar to #47843, this patch avoids `ArrayTransform` in the `resolveMapType` function if the resolution expression is the same as the input parameter.

### Why are the changes needed?

My previous PR #47381 was not merged, but I still think it is an optimization, so I reopened it.

During the upgrade from Spark 3.1.1 to 3.5.0, I found a performance regression in map type inserts. There were extra conversion expressions in the project before the insert, which do not seem to be necessary:

```
map_from_arrays(transform(map_keys(map#516), lambdafunction(lambda key#652, lambda key#652, false)), transform(map_values(map#516), lambdafunction(lambda value#654, lambda value#654, false))) AS map#656
```

### Does this PR introduce _any_ user-facing change?

No.

### How was this patch tested?

Added a unit test.

### Was this patch authored or co-authored using generative AI tooling?

No.

Closes #50245 from wForget/SPARK-48922.

Authored-by: wforget <[email protected]>
Signed-off-by: beliefer <[email protected]>
(cherry picked from commit 1be108e)
Signed-off-by: beliefer <[email protected]>
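For readers unfamiliar with the resolver internals, here is a minimal standalone sketch of the idea, using made-up expression classes rather than the actual Catalyst ones: build an `ArrayTransform` only when resolution actually rewrote the lambda body; if the resolved body is still the bare lambda variable, the transform would be an identity (`x -> x`) and the input can be reused directly.

```scala
// Simplified model, NOT the real Catalyst classes; it only illustrates the
// "skip the identity transform" check this patch adds around resolveMapType.
sealed trait Expr
case class Col(name: String) extends Expr
case class LambdaVar(name: String) extends Expr
case class ArrayTransform(input: Expr, param: LambdaVar, body: Expr) extends Expr

// Wrap the input in a transform only when resolution changed the element
// expression; an unchanged lambda variable means the lambda is `x -> x`.
def transformIfNeeded(input: Expr, param: LambdaVar, resolvedBody: Expr): Expr =
  if (resolvedBody == param) input
  else ArrayTransform(input, param, resolvedBody)

// Identity resolution: no wrapper is produced, the column is used as-is.
assert(transformIfNeeded(Col("map_keys"), LambdaVar("key"), LambdaVar("key")) == Col("map_keys"))
```

Per the plan fragment above, the same identity check pays off twice per map column, once for the key transform and once for the value transform, before `map_from_arrays` would be assembled.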
@wForget @viirya @dongjoon-hyun Thank you!
@wForget Could you create a backport PR for branch-3.5?
Sure, I will create it later.
Thank you @wForget, late LGTM.
…expression for map type

### What changes were proposed in this pull request?

Backports #50245 to 3.5.

Similar to #47843, this patch avoids `ArrayTransform` in the `resolveMapType` function if the resolution expression is the same as the input parameter.

### Why are the changes needed?

My previous PR #47381 was not merged, but I still think it is an optimization, so I reopened it.

During the upgrade from Spark 3.1.1 to 3.5.0, I found a performance regression in map type inserts. There were extra conversion expressions in the project before the insert, which do not seem to be necessary:

```
map_from_arrays(transform(map_keys(map#516), lambdafunction(lambda key#652, lambda key#652, false)), transform(map_values(map#516), lambdafunction(lambda value#654, lambda value#654, false))) AS map#656
```

### Does this PR introduce _any_ user-facing change?

No.

### How was this patch tested?

Added a unit test.

### Was this patch authored or co-authored using generative AI tooling?

No.

Closes #50245 from wForget/SPARK-48922.

Authored-by: wforget <643348094@qq.com>
Signed-off-by: beliefer <beliefer@163.com>
(cherry picked from commit 1be108e)

Closes #50265 from wForget/SPARK-48922-3.5.

Authored-by: wforget <[email protected]>
Signed-off-by: beliefer <[email protected]>
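At the SQL level, the effect can be observed from the insert plan. A hypothetical spark-shell sketch (the table names `src` and `dst` are made up for illustration, not from the PR):

```scala
// Hypothetical repro, assuming a running spark-shell session.
spark.sql("CREATE TABLE src (m MAP<STRING, STRING>) USING parquet")
spark.sql("CREATE TABLE dst (m MAP<STRING, STRING>) USING parquet")

// Before this patch the analyzed plan rewrote the projected column as
//   map_from_arrays(transform(map_keys(m), k -> k),
//                   transform(map_values(m), v -> v)) AS m
// With the patch, the identity transforms are elided and `m` is projected directly.
spark.sql("EXPLAIN EXTENDED INSERT INTO dst SELECT m FROM src").show(truncate = false)
```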