Intermittent "NoSuchKey: The specified key does not exist" errors when using S3 backend #2714
Comments
Unfortunately, i don't repro this at all. Here's what i did:
That all seemed to work as desired. Since it is not working for you, we'll need more information to track this down. Can you give us the set of steps you followed that caused the above errors? Thanks! |
Directly pinging @rtkelly13 to help ensure the above is noticed. Thanks! |
It sounds like this part may be key to reproing this - specifically the use of an |
Will attempt to validate that scenario more tonight. |
Unfortunately, i was still not able to repro the problem here. Note: when my profile information was unspecified/incorrect, i got messages of the form:
However, if i set up my credentials to use a non-default profile (called I'm also going to try the basic approach of fully specifying things through env-vars to see if that works as well. |
Note: we def want to get to the bottom of things to help you out. So any extra info/steps that can be provided would be great. |
Additional info. When the bucket itself doesn't exist, i get this distinct error:
So still scratching my head as to what's causing the |
Hey @rtkelly13 I think i may have uncovered what you're hitting. Specifically, i was able to get that error message when i did the following:
In this case, pulumi thinks the stack (and its requisite info) exists somewhere, and it tries to load it, but it can't find it because it's now looking within the First, can you confirm if this sounds like what you were encountering? If so, we can start brainstorming on what we want to happen here. At the very least, the error message should become clear as to what the problem is. Whether we want to do other work beyond that is TBD. |
Thank you for the work on this! I have been using the local file backend, which could be a similar cause to your reproduction. I wouldn't have expected a pre-existing stack on a different login to side effect in this way? It's good to know that the bucket and authentication seem to be working correctly for me. I am using Windows also, if that may help in my specific reproduction. I guess I have to import my stack into S3 before trying to link up with my existing stack? I think it was just that error that threw me off; as you said, better error reporting should avoid further issues being created. I'm also not a huge fan of setting the AWS_PROFILE to say what credentials to use; it appears to be the only way AWS provides, unfortunately. |
That sounds reasonable. It's possible (though not verified) that logging into the right backend, exporting your existing stack, logging out, logging into the new backend, and importing the stack might work. If it does, let us know.
Agreed. Will see what can be done here.
How many profiles do you use? I believe you can pick a |
Note: I think primarily we've been considering cloud-backends as a way for people to store their new stacks. Migration between backends hasn't really been something we've thought about or considered as something that needs to be supported. It may work, or it may not. I suppose we have the primitives to help support the workflows you want. But once you start trying to juggle all this, it may require a little hacking on your end to get things working :) |
Tried importing the stack into the S3 backend but got the same error as before
Using the following command. Going to give creating a brand new stack a go from S3 from the start; it will give me an idea of the file structure that is uploaded and of how to fudge it. Hopefully I can do it pretty easily and will put reproduction steps in here for anyone else who may need to do this. Going to keep playing with this because it isn't a blocker for me personally; I know of a few issues around this area that I haven't fully reproduced enough to create issues for them.
default to be respected. I tend toward having a very minimal set-up, especially if I am switching between different AWS accounts; via local development or CI agents it's a good way to swap out users at each level of the deployment procedure
Knowing which key it is complaining about would massively help me when manually trying to migrate my stack over to S3, and I'm sure other people in my situation with any of the obscure |
I think (but am not totally certain) that part of this is due to another piece of info we store locally to keep track of which stack you're currently using. That information is normally in Note: it's entirely possible we need to fixup several things here. i.e. that logging out might need to clean it up, or that the stored stack information knows which backend you're using and won't apply if it doesn't match up with your logged-in backend. I'll have to discuss with colleagues to decide what's likely the best approach here. |
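The stale-pointer behavior described above can be illustrated with a small, self-contained sketch. The file names and layout here are hypothetical stand-ins, not Pulumi's actual workspace format: a local "workspace" file remembers the current stack, but not which backend it was created against, so the lookup dangles after switching backends.

```python
import json
import tempfile
from pathlib import Path

# Hypothetical workspace file: it remembers which stack is "current",
# but not which backend that stack lives in.
def select_stack(workspace: Path, stack: str) -> None:
    workspace.write_text(json.dumps({"stack": stack}))

def load_checkpoint(backend_root: Path, workspace: Path) -> dict:
    stack = json.loads(workspace.read_text())["stack"]
    ckpt = backend_root / "stacks" / f"{stack}.json"
    if not ckpt.exists():
        # The local pointer is still valid, but the *new* backend
        # has no checkpoint for that stack.
        raise FileNotFoundError(f"NoSuchKey: {ckpt.name} does not exist")
    return json.loads(ckpt.read_text())

with tempfile.TemporaryDirectory() as tmp:
    old_backend = Path(tmp, "local-backend")   # file:// backend
    new_backend = Path(tmp, "s3-backend")      # stand-in for the bucket
    (old_backend / "stacks").mkdir(parents=True)
    (new_backend / "stacks").mkdir(parents=True)

    workspace = Path(tmp, "workspace.json")
    (old_backend / "stacks" / "dev.json").write_text("{}")
    select_stack(workspace, "dev")

    load_checkpoint(old_backend, workspace)    # works: checkpoint is there
    try:                                       # after "pulumi login s3://..."
        load_checkpoint(new_backend, workspace)
        caught = None
    except FileNotFoundError as e:
        caught = str(e)
    print("error:", caught)
```

The selected stack resolves fine against the old backend, but against the new one the same pointer produces a not-found error much like the `NoSuchKey` message in this issue.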
Moving to 0.24 as this is more about a better error message in this scenario. |
The way I got this working was to copy the appropriate stack files from ~/.pulumi/stacks After that, I ran I think there are still a few rough edges, expected from the change to support this cloud backend feature. When I query the stack it still says it is managed by my local machine, which is obviously because I copied the files; if an update changed that information to the S3 path, that may be better? It seems like most of the things required to interact with a specific stack just work! Like you said, it's more about user experience and failure states guiding users in the right direction the first time. |
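The manual copy workaround can be sketched as follows. The layout (one JSON checkpoint per stack under a `.pulumi/stacks` directory, mirrored in the bucket) is an assumption for illustration, and a local directory stands in for the S3 bucket; in practice the copy would be done with `aws s3 cp` or `pulumi stack export`/`import`.

```python
import json
import shutil
import tempfile
from pathlib import Path

def migrate_stack(local_stacks: Path, bucket_root: Path, stack: str) -> Path:
    """Copy a stack's checkpoint file into the layout the S3 backend
    is assumed to expect: <bucket>/.pulumi/stacks/<stack>.json."""
    src = local_stacks / f"{stack}.json"
    dst = bucket_root / ".pulumi" / "stacks" / f"{stack}.json"
    dst.parent.mkdir(parents=True, exist_ok=True)
    shutil.copyfile(src, dst)
    return dst

with tempfile.TemporaryDirectory() as tmp:
    # Stand-in for ~/.pulumi/stacks on the local machine.
    local_stacks = Path(tmp, "home", ".pulumi", "stacks")
    local_stacks.mkdir(parents=True)
    (local_stacks / "dev.json").write_text(json.dumps({"stack": "dev"}))

    # Stand-in for s3://<your-bucket>.
    bucket = Path(tmp, "bucket")
    copied = migrate_stack(local_stacks, bucket, "dev")
    exists_after_copy = copied.exists()
    print("copied to:", copied.relative_to(bucket))
```

As noted above, the copied checkpoint still records the old backend's metadata, which is why the stack reports being managed by the local machine afterwards.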
Glad to hear you were able to work through it!
Indeed. Our primary purpose was to just enable this high-value scenario for users. Secondary is smoothing out these rough edges, esp. for cases like moving stacks. Def something we'd like to do, but it has to be prioritized against all the other work for the foreseeable future :) |
@CyrusNajmabadi thank you for your help, really like pulumi and the approach taken! |
I'm ok keeping this issue open. It's a good documentation of the problem. If we opened a new issue, we'd just have to link back to this for whoever works on it to understand what's going on and get the necessary context. Thanks! |
Created and updated my stack.
Logged out of the normal backend, then logged into the s3 backend.
Updated my stack. now that AWS_PROFILE support is fixed in 0.17.15, and the project I'm working on is ready to try to manage multiple accounts, I've started running into workflow issues around this as well. it seems like there may be a strong case going forward to be able to track this. a good example in an s3-state-only world: i have 3 stacks, used in a promotion workflow like this:
I end up like this:
now multiply that by a single client with 3 or 4 products, and 2 or 3 clients in parallel, and you've got a strong requirement to start building a wrapper around pulumi just to handle pulumi.... I already have a strong drive to do this because of how |
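The kind of wrapper being described might look like this sketch (stack names, bucket names, and AWS profiles are all hypothetical): choose the state backend and credentials per stack, then emit the pulumi commands to run for that stage of the promotion workflow.

```python
# Hypothetical per-stack settings: which state bucket and AWS profile each
# stage of a dev -> staging -> prod promotion workflow should use.
STACKS = {
    "dev":     {"bucket": "s3://acme-pulumi-dev",     "profile": "acme-dev"},
    "staging": {"bucket": "s3://acme-pulumi-staging", "profile": "acme-staging"},
    "prod":    {"bucket": "s3://acme-pulumi-prod",    "profile": "acme-prod"},
}

def commands_for(stack: str) -> list[list[str]]:
    """Build the command sequence a wrapper would run for one stack:
    log into that stack's backend, select the stack, then update it."""
    cfg = STACKS[stack]
    env = f"AWS_PROFILE={cfg['profile']}"
    return [
        ["env", env, "pulumi", "login", cfg["bucket"]],
        ["env", env, "pulumi", "stack", "select", stack],
        ["env", env, "pulumi", "up", "--yes"],
    ]

for cmd in commands_for("staging"):
    print(" ".join(cmd))
```

A real wrapper would execute these with `subprocess.run`; the point is that the backend/profile pairing has to live in the wrapper's own config, precisely because pulumi doesn't currently track it per stack.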
I should note that a single bucket can handle all of the stages cross-account.
other pulumi up etc operations will follow this - but if you adjust stack config |
cc @ellismg since the issues here really highlight that it might make sense to tie backends more carefully to specific stacks - or at least remember (in |
otherwise it's a piece of config that doesn't follow the code since it can't be committed. (though I'm totally flexible if that's standard and you can choose to commit it). for sure this is a change from the pulumi SaaS backend, but it seems critical now that there's more than a single option to treat this as config for the code. |
...and I wrote it. there's no native MFA handling on aws auth that requires mfa.... |
I understand the feeling here, and I agree that we can probably figure out something better. The thing that I am struggling with is how
This lists all the stacks in the same project from the current backend. It must be run in a folder with a
This lists all stacks in the current backend. So you see every stack you have access to, even if you don't have the source code for their corresponding projects around. If we move to a world where we use information in the specific |
it seems like those would be functionality you'd lose, or might be constrained, when using separate backends. that said - it might be a nice touch to be able to have the state backend auth separately from a stack, so you could designate an AWS account+bucket that's independent of where the stack ends up. that would leave those features intact, and retain similar behavior to the pulumi backend. I do think that having a per-stack backend should be an optional workflow. the way I've structured my current wrapper, it enforces a for the workflow I see running for the client this is for; this makes sense so far. (imperfect, but that's life) |
I've started running into this issue as well. Unfortunately, it's intermittent and I haven't been able to figure out what causes it. It does seem to have started happening once my state file got larger (>1mb?). Sometimes
The stack is trying to spin up an EKS cluster with a couple of node groups, and then create some Kubernetes resources in the cluster. The AWS infrastructure steps seem to work OK, and it seems like the errors usually happen when creating the Kubernetes resources?
"dependencies": {
"@pulumi/aws": "0.18.23",
"@pulumi/awsx": "0.18.7",
"@pulumi/eks": "0.18.9",
"@pulumi/kubernetes": "0.25.2",
"@pulumi/pulumi": "0.17.25"
}, |
@bdchauvette which backend are you using? S3? |
For historical reasons, we used to need to load an existing checkpoint to copy some data from it into the snapshot when saving a new snapshot. The need for this was removed as part of the general work in #2678, but we continued to load the checkpoint and then just disregard the data that was returned (unless there was an error and that error was not FileNotFound, in which case we would fail). Our logic for checking if something was FileNotFound was correct when we wrote it, but when we adopted go-cloud in order to have our filestate backend also write to blob storage backends like S3, we forgot that we had checks like `os.IsNotExist()` floating around which were now incorrect. That meant if the file did not exist for some reason, instead of going along as planned, we'd error out with an error saying something wasn't found. When we write a checkpoint, we first "back up" the previous version by renaming it to include a `.bak` suffix, then we write the new file in place. However, this can run afoul of eventual consistency models like S3's, since there will be a period of time in which a caller may observe that the object is missing, even after a new version is written (based on my understanding of [S3's consistency model](https://docs.aws.amazon.com/AmazonS3/latest/dev/Introduction.html#ConsistencyModel)). Since we no longer need to actually copy any information from the previous checkpoint, we can simply remove the call to load it entirely. As a follow up, we need to audit places inside the filebased backend that assume `os.*` functions are going to do what we want them to do, since in general they will not. Fixes #2714
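The core of that fix is that an OS-level not-found check no longer recognizes the errors a blob backend returns. A small Python analogy (not the actual Go code, which would use go-cloud's error codes rather than these stand-in classes):

```python
class BlobError(Exception):
    """Stand-in for a blob-store error, e.g. S3's NoSuchKey."""
    def __init__(self, code: str):
        super().__init__(code)
        self.code = code

def is_not_found_wrong(err: Exception) -> bool:
    # Old-style check: only recognizes OS-level "file not found",
    # analogous to os.IsNotExist in Go.
    return isinstance(err, FileNotFoundError)

def is_not_found_right(err: Exception) -> bool:
    # Backend-aware check: also understands blob-store error codes,
    # analogous to checking gcerrors.Code(err) == gcerrors.NotFound
    # when using go-cloud.
    return isinstance(err, FileNotFoundError) or (
        isinstance(err, BlobError) and err.code == "NoSuchKey"
    )

s3_err = BlobError("NoSuchKey")
print(is_not_found_wrong(s3_err))   # False: missing key treated as fatal
print(is_not_found_right(s3_err))   # True: missing key handled gracefully
```

With the old-style check, a missing key from S3 falls through to the generic error path and surfaces as "failed to load checkpoint: ... NoSuchKey", which matches the symptom in this issue.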
@ |
I have been following the cloud backend change
#2455
When trying to run
pulumi login s3://pulumi-state-files
and then pulumi up
I get the following error
error: failed to load checkpoint: blob (code=NotFound): NoSuchKey: The specified key does not exist.
I can navigate and interrogate that specific S3 bucket via the AWS CLI and have set up the related AWS_PROFILE environment variable.
Not quite sure what I am doing incorrectly; I have followed all the documentation and issue steps I have been able to find.