
[SPARK-48922][SQL] Avoid redundant array transform of identical expression for map type #50245

Closed

Conversation

@wForget (Member) commented Mar 12, 2025

### What changes were proposed in this pull request?

Similar to #47843, this patch avoids `ArrayTransform` in the `resolveMapType` function if the resolution expression is the same as the input parameter.

### Why are the changes needed?

My previous PR #47381 was not merged, but I still think it is a worthwhile optimization, so I reopened it.

During the upgrade from Spark 3.1.1 to 3.5.0, I found a performance regression in map type inserts.

There are some extra conversion expressions in the project before insert, which do not always seem to be necessary:

```
map_from_arrays(transform(map_keys(map#516), lambdafunction(lambda key#652, lambda key#652, false)), transform(map_values(map#516), lambdafunction(lambda value#654, lambda value#654, false))) AS map#656
```
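A minimal sketch of the idea behind the fix, in Catalyst terms (the helper name and shape are a simplification for illustration, not the actual patch): if resolving the key or value expression leaves the lambda parameter unchanged, the `transform` wrapper is an identity and can be skipped.

```scala
import org.apache.spark.sql.catalyst.expressions.{
  ArrayTransform, Expression, LambdaFunction, NamedLambdaVariable}

// Simplified sketch, not the actual resolveMapType change: only wrap `input`
// (e.g. map_keys(col) or map_values(col)) in a transform when resolution
// actually rewrote the element expression. If the resolved body is just the
// lambda parameter itself, transform(input, x -> x) is a no-op.
def transformIfNeeded(
    input: Expression,
    param: NamedLambdaVariable,
    resolvedBody: Expression): Expression = {
  if (resolvedBody == param) {
    input // identity lambda: skip the redundant ArrayTransform
  } else {
    ArrayTransform(input, LambdaFunction(resolvedBody, Seq(param)))
  }
}
```

When both the key and the value sides hit the identity case, the whole `map_from_arrays(transform(...), transform(...))` wrapper above becomes unnecessary and the projection can reference `map#516` directly.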

### Does this PR introduce _any_ user-facing change?

No

### How was this patch tested?

Added a unit test.

### Was this patch authored or co-authored using generative AI tooling?

No

@wForget (Member, Author) commented Mar 12, 2025

@viirya @dongjoon-hyun @kazuyukitanimura could you please take a look?

```scala
Some(
  Alias(nullCheckedInput, expected.name)(
    nonInheritableMetadataKeys =
      Seq(CharVarcharUtils.CHAR_VARCHAR_TYPE_STRING_METADATA_KEY)))
```
A Contributor commented:

Could you explain why you want to remove `CHAR_VARCHAR_TYPE_STRING_METADATA_KEY` from the metadata?

@wForget (Member, Author) replied:

I just want to be consistent with #47843

The Contributor followed up:

@viirya Could you help me understand why `CHAR_VARCHAR_TYPE_STRING_METADATA_KEY` is removed from the metadata?

@viirya (Member) commented Mar 12, 2025

Hmm, I don't remember the exact reason for the removal. Maybe it is because it brings additional checks or transformations. Since this is the input expression used to write to the table, the metadata is useless (the writer checks against the table columns).
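For context, a small sketch of what `nonInheritableMetadataKeys` does on the `Alias` above (assuming this runs inside Spark's `sql` packages where `CharVarcharUtils` is visible; the metadata value is illustrative):

```scala
import org.apache.spark.sql.catalyst.expressions.{Alias, AttributeReference}
import org.apache.spark.sql.catalyst.util.CharVarcharUtils
import org.apache.spark.sql.types.{MetadataBuilder, StringType}

// An input attribute still carrying the char/varchar marker, as a
// CHAR(n)/VARCHAR(n) column would.
val metadata = new MetadataBuilder()
  .putString(CharVarcharUtils.CHAR_VARCHAR_TYPE_STRING_METADATA_KEY, "varchar(10)")
  .build()
val input = AttributeReference("c", StringType, metadata = metadata)()

// An Alias inherits its child's metadata unless a key is listed as
// non-inheritable, so the marker is dropped from the write-side projection.
val alias = Alias(input, "c")(nonInheritableMetadataKeys =
  Seq(CharVarcharUtils.CHAR_VARCHAR_TYPE_STRING_METADATA_KEY))
assert(!alias.metadata.contains(
  CharVarcharUtils.CHAR_VARCHAR_TYPE_STRING_METADATA_KEY))
```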

@beliefer (Contributor) left a comment

Overall, LGTM. cc @viirya @dongjoon-hyun

@beliefer closed this in 1be108e on Mar 13, 2025
beliefer pushed a commit that referenced this pull request Mar 13, 2025
…ssion for map type

Closes #50245 from wForget/SPARK-48922.

Authored-by: wforget <[email protected]>
Signed-off-by: beliefer <[email protected]>
(cherry picked from commit 1be108e)
Signed-off-by: beliefer <[email protected]>
@beliefer (Contributor) commented:

@wForget @viirya @dongjoon-hyun Thank you!
Merged into 4.0/master

@beliefer (Contributor) commented:

@wForget Could you create a backport PR for branch-3.5?

@wForget (Member, Author) commented Mar 13, 2025

> @wForget Could you create a backport PR for branch-3.5?

Sure, I will create it later.

wForget added a commit to wForget/spark that referenced this pull request Mar 13, 2025
…ssion for map type

Closes apache#50245 from wForget/SPARK-48922.

Authored-by: wforget <[email protected]>
Signed-off-by: beliefer <[email protected]>

(cherry picked from commit 1be108e)
@kazuyukitanimura (Contributor) commented:

Thank you @wForget, late LGTM.

beliefer pushed a commit that referenced this pull request Mar 13, 2025
…expression for map type

### What changes were proposed in this pull request?

Backports #50245 to 3.5

Closes #50245 from wForget/SPARK-48922.

Authored-by: wforget <643348094@qq.com>
Signed-off-by: beliefer <beliefer@163.com>

(cherry picked from commit 1be108e)

Closes #50265 from wForget/SPARK-48922-3.5.

Authored-by: wforget <[email protected]>
Signed-off-by: beliefer <[email protected]>
anoopj pushed a commit to anoopj/spark that referenced this pull request Mar 15, 2025
…ssion for map type

Closes apache#50245 from wForget/SPARK-48922.

Authored-by: wforget <[email protected]>
Signed-off-by: beliefer <[email protected]>
kazemaksOG pushed a commit to kazemaksOG/spark-custom-scheduler that referenced this pull request Mar 27, 2025
…ssion for map type

Closes apache#50245 from wForget/SPARK-48922.

Authored-by: wforget <[email protected]>
Signed-off-by: beliefer <[email protected]>