1. Create some state locally, probably with the null provider or something similar
2. Provision an IAM role with an inline policy that does not have the s3:DeleteObject permission
3. Set the role ARN to the access_role.role_arn attribute in the s3 backend configuration
4. Run terraform init -migrate-state
5. Observe that no error is thrown
6. Run terraform plan/terraform apply and you should now get an error that the lock file could not be deleted
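For illustration, the role in step 2 might look roughly like the sketch below. All names and ARNs are hypothetical placeholders, not taken from the report; the point is that the inline policy grants object reads and writes but omits s3:DeleteObject, which is the permission needed to remove the lock file.

```hcl
# Sketch of a state-management role whose inline policy can read and write
# state objects but cannot delete them. Placeholder names/ARNs throughout.
resource "aws_iam_role" "state" {
  name = "terraform-state-role"

  assume_role_policy = jsonencode({
    Version = "2012-10-17"
    Statement = [{
      Effect    = "Allow"
      Principal = { AWS = "arn:aws:iam::111111111111:root" }
      Action    = "sts:AssumeRole"
    }]
  })

  inline_policy {
    name = "state-access"
    policy = jsonencode({
      Version = "2012-10-17"
      Statement = [{
        Effect = "Allow"
        # Note: no s3:DeleteObject, so the lock file can never be removed.
        Action   = ["s3:GetObject", "s3:PutObject"]
        Resource = "arn:aws:s3:::my-state-bucket/*"
      }]
    })
  }
}
```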
Debug Output
No response
Panic Output
No response
Important Factoids
TLDR: -migrate-state appears to be swallowing errors. In my case, it's the AccessDenied error from S3 for missing permissions to delete the lock file.
I provisioned some infrastructure running Terraform locally on my machine, so I had a local state file. I went to implement the s3 backend, noticed the new lockfile feature, and set use_lockfile=true in the backend configuration. I provisioned the IAM role following the permissions-required guide for the S3 state backend, which left me with an IAM role that had GetObject and PutObject. I set the role ARN on the role_arn attribute inside the assume_role map in the s3 backend configuration and ran terraform init -migrate-state.
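The backend block described here would look roughly like this (bucket, key, region, and role ARN are placeholders, not values from the report): use_lockfile enables the S3-native lock file, and the role ARN sits inside the assume_role map.

```hcl
# Hypothetical s3 backend configuration matching the description above.
terraform {
  backend "s3" {
    bucket       = "my-state-bucket"
    key          = "envs/prod/terraform.tfstate"
    region       = "us-east-1"
    use_lockfile = true # new S3-native lock file feature

    assume_role = {
      role_arn = "arn:aws:iam::111111111111:role/terraform-state-role"
    }
  }
}
```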
Below I'm highlighting where I executed the command, followed by the next command I teed up, which I'll talk about in just a sec. You can see that no error was thrown.
$ aws-vault exec REDACTED -- terraform init -migrate-state
Initializing the backend...
Do you want to copy existing state to the new backend?
Pre-existing state was found while migrating the previous "local" backend to the
newly configured "s3" backend. No existing state was found in the newly
configured "s3" backend. Do you want to copy this state to the new "s3"
backend? Enter "yes" to copy and "no" to start with an empty state.
Enter a value: yes
Successfully configured the backend "s3"! Terraform will automatically
use this backend unless the backend configuration changes.
Initializing modules...
Initializing provider plugins...
- Reusing previous version of hashicorp/aws from the dependency lock file
- Using previously-installed hashicorp/aws v5.84.0
Terraform has been successfully initialized!
You may now begin working with Terraform. Try running "terraform plan" to see
any changes that are required for your infrastructure. All Terraform commands
should now work.
If you ever set or change modules or backend configuration for Terraform,
rerun this command to reinitialize your working directory. If you forget, other
commands will detect it and remind you to do so if necessary.
$ aws-vault exec REDACTED -- terraform apply
As you can see, I followed up the init with an apply so I could ensure that the state had been migrated successfully.
$ aws-vault exec REDACTED -- terraform apply
╷
│ Error: Error acquiring the state lock
│
│ Error message: operation error S3: PutObject, https response error StatusCode: 412, RequestID: REDACTED, HostID: REDACTED, api error PreconditionFailed: At least one of the pre-conditions you
│ specified did not hold
│ Lock Info:
│ ID: 1390ee95-f07d-6341-f065-570f96390187
│ Path: REDACTED/REDACTED-s3.tfstate
│ Operation: migration destination state
│ Who: REDACTED@REDACTED
│ Version: 1.10.5
│ Created: 2025-01-28 21:11:11.403129 +0000 UTC
│ Info:
│
│
│ Terraform acquires a state lock to protect the state from being written
│ by multiple users at the same time. Please resolve the issue above and try
│ again. For most commands, you can disable locking with the "-lock=false"
│ flag, but this is not recommended.
I checked the bucket via the AWS Console and sure enough my lock file was still there. I wasn't sure what had happened, but since it's an experimental feature, I thought maybe it was a one off, so I decided to just delete the lock file and reattempt my apply.
$ aws-vault exec REDACTED -- terraform apply
State refresh stuff and whatnot, trimmed for brevity
....
No changes. Your infrastructure matches the configuration.
Terraform has compared your real infrastructure against your configuration and found no differences, so no changes are needed.
Releasing state lock. This may take a few moments...
╷
│ Error: Error releasing the state lock
│
│ Error message: failed to delete the lock file: operation error S3: DeleteObject, https response error StatusCode: 403, RequestID: REDACTED, HostID: REDACTED, api error AccessDenied: User:
│ REDACTED is not authorized to perform: s3:DeleteObject on resource: "REDACTED" because no identity-based policy allows the s3:DeleteObject action
│ Lock Info:
│ ID: d7b921d6-dc23-01bd-2b41-9f8131688838
│ Path: REDACTED/REDACTED-s3.tfstate
│ Operation: OperationTypeApply
│ Who: REDACTED@REDACTED
│ Version: 1.10.5
│ Created: 2025-01-28 21:12:02.711953 +0000 UTC
│ Info:
│
│
│ Terraform acquires a lock when accessing your state to prevent others
│ running Terraform to potentially modify the state at the same time. An
│ error occurred while releasing this lock. This could mean that the lock
│ did or did not release properly. If the lock didn't release properly,
│ Terraform may not be able to run future commands since it'll appear as if
│ the lock is held.
│
│ In this scenario, please call the "force-unlock" command to unlock the
│ state manually. This is a very dangerous operation since if it is done
│ erroneously it could result in two people modifying state at the same time.
│ Only call this command if you're certain that the unlock above failed and
│ that no one else is holding a lock.
╵
Ahh, there we go: my state management role is missing the s3:DeleteObject permission. I went over to another workspace that I use to provision it, added the permission, applied, waited 10 seconds because of IAM eventual consistency, and then reapplied, and everything went fine.
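Sketched against the placeholder names above, the fix is simply adding s3:DeleteObject to the inline policy's statement so Terraform can remove the lock file when it releases the lock:

```hcl
# Updated inline policy (placeholder names): s3:DeleteObject added so the
# lock file created by use_lockfile can be cleaned up on release.
policy = jsonencode({
  Version = "2012-10-17"
  Statement = [{
    Effect   = "Allow"
    Action   = ["s3:GetObject", "s3:PutObject", "s3:DeleteObject"]
    Resource = "arn:aws:s3:::my-state-bucket/*"
  }]
})
```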
I typed all of this to say that whatever functionality is backing the -migrate-state flag looks to be swallowing the AccessDenied error that was most likely thrown when I migrated the state.
References
I did notice that this section of the docs highlights the need for s3:DeleteObject, but it is tucked away in the section talking about using workspaces. Since we're not using workspaces, I skipped over it.
Hey @ddouglas 👋 Thank you for taking the time to raise this! While the AWS Provider Team will ultimately need to look into this, backend processes are handled by Terraform itself rather than by providers. With that in mind, I'm going to transfer this issue over to the appropriate repository for tracking.
Terraform Core Version
v1.10.5
AWS Provider Version
v5.84.0
Affected Resource(s)
State Migrations I guess
Expected Behavior
Error is thrown during state imports
Actual Behavior
No error was thrown