jnm
    @jws I'm using it on Linux, where it's pretty simple. A little indicator applet that sits in the tray and has about three settings (autostart boolean, zip/lat-long and color). So far so good.
    jnm
      Put f.lux on the laptop. Been a while since I tried it last. Let's see if it sticks this time.
      jnm
        @bcb @jws Anymore it seems like you have to get past the HR bots, then the HR weasels. >.< I've been working on an information-dense, buzzword Bingo kind of resume, though, in hopes that it'll tickle the fancy of someone's OCR reader.
        jnm
          @jws It's the old experience Catch-22. It's hard to get a job based on a knack for breaking things. 😂
          jnm
            @jws I have. Local retail giant had a position open recently, but they wanted lots more experience than I have. The security community likes to grouse about the shortage of bodies, but they fail at providing on-ramps for those who are interested.
            jnm
              @keita It's pretty clunky. I opened a chat with their help monkeys, and one of them suggested I open a ticket. Evidently they're aware of the slow deletes and the inability to delete non-empty buckets, but there were some new issues I raised with the 500 status codes.
              jnm
                @matigo But it probably *is* Apple's phone, and you're just using it under some ToS. #DMCAFTW / @larand
                jnm
                  @jws DuckDuckGo came up dry, but Google is my friend. I'll see what I can figure out. Thanks for the tip!
                  jnm
                    @jws I have no idea what that means. 😂 I did throw `time.sleep(60)` in before the `continue` a little while ago.
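                    The pattern in question, sketched standalone with a fake flaky call standing in for B2 (the names are made up so this actually runs):
                    ```python
                    import random
                    import time

                    def flaky_delete():
                        """Stand-in for a delete call that returns HTTP 500 half the time."""
                        if random.random() < 0.5:
                            raise RuntimeError("HTTP 500")

                    pending = list(range(10))   # pretend these are files to delete
                    while pending:
                        try:
                            flaky_delete()
                        except RuntimeError:
                            time.sleep(60)      # the new bit: back off a minute before retrying
                            continue
                        pending.pop()           # success; on to the next file
                    ```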
                    jnm
                      @33mhz It's clunky. And it's a learning experience, which is what a beta API is good for, IMO. It's served its purpose for me, and it saved us some heartache at the day job. We were considering it as a storage backend for some stuff.
                      jnm
                        FWIW, I threw a `try`/`except`/`continue` into the script to retry on 500s, but it'll just keep failing indefinitely unless I kill it, wait, and restart.
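                        In context, the loop looks something like this (a sketch built on the `list_files`/`delete_file` helpers sketched further down the thread; note there's no retry cap, which is exactly why it can spin forever):
                        ```python
                        import time
                        import requests

                        # List a page, delete each file, and on a 500 wait a minute and retry.
                        # With no cap on retries, a long B2 outage means this spins until killed.
                        while True:
                            page = list_files()
                            if not page["files"]:
                                break   # the bucket is finally empty
                            for f in page["files"]:
                                while True:
                                    try:
                                        delete_file(f).raise_for_status()
                                        break   # this file is gone; next one
                                    except requests.HTTPError:
                                        time.sleep(60)   # the API is 500ing; back off, then retry
                                        continue
                        ```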
                        jnm
                          I think -- and yes, it's a beta, so it's not a big deal -- that they're just not ready for primetime.
                          jnm
                            I checked my caps in my account, and as far as I can tell, I'm not hitting any limits.
                            jnm
                              Which is cool and all, except their API starts returning 500s before I can finish a single loop. Yesterday it would delete about 700 before returning a 500. Today I'm lucky to get 30.
                              jnm
                                There might be a way to fire off a bunch of delete requests in parallel, but I'm a n00b, so I'm looping through that list of 1000, then listing again, and so on.
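                                For what it's worth, the parallel version wouldn't be much code; something like this with `concurrent.futures`, reusing the `list_files`/`delete_file` helpers sketched below (untested guesswork; B2 might just 500 faster):
                                ```python
                                from concurrent.futures import ThreadPoolExecutor

                                # Hypothetical parallel variant: fan the one-at-a-time deletes out
                                # across a few threads. No idea how the beta API copes with it.
                                with ThreadPoolExecutor(max_workers=8) as pool:
                                    while True:
                                        page = list_files()
                                        if not page["files"]:
                                            break   # bucket is empty
                                        # Each worker issues one b2_delete_file_version call.
                                        for resp in pool.map(delete_file, page["files"]):
                                            resp.raise_for_status()
                                ```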
                                jnm
                                  So once you've got the bucket ID, you can list the files. They'll give you about 1000 at a time. The delete endpoint takes one file at a time.
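                                  Sketched out, continuing from the auth bits in the next post down (`api_url`, `token`, and `bucket_id` are assumed from there; the endpoint and field names are per Backblaze's published docs, not necessarily my script):
                                  ```python
                                  import requests

                                  # Assumes api_url, token, and bucket_id from the auth sketch below.
                                  def list_files(start_name=None):
                                      """Fetch one page (up to 1000 entries) from b2_list_file_names."""
                                      body = {"bucketId": bucket_id, "maxFileCount": 1000}
                                      if start_name is not None:
                                          body["startFileName"] = start_name   # resume where the last page left off
                                      resp = requests.post(
                                          f"{api_url}/b2api/v2/b2_list_file_names",
                                          headers={"Authorization": token},
                                          json=body,
                                      )
                                      resp.raise_for_status()
                                      return resp.json()   # carries "files" and "nextFileName"

                                  def delete_file(f):
                                      """b2_delete_file_version takes exactly one fileName/fileId pair."""
                                      return requests.post(
                                          f"{api_url}/b2api/v2/b2_delete_file_version",
                                          headers={"Authorization": token},
                                          json={"fileName": f["fileName"], "fileId": f["fileId"]},
                                      )
                                  ```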
                                  jnm
                                    To delete a file, apart from authenticating, you have to provide both the file name and the file ID. The only way to get the file ID is to list the files in the bucket, for which you need the bucket ID, so you have to list all the buckets first.
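                                    The whole chain, roughly, sketched with `requests` against the native v2 API (paths and field names are from Backblaze's docs; the credentials and bucket name are placeholders):
                                    ```python
                                    import requests

                                    KEY_ID = "..."    # placeholder application key ID
                                    APP_KEY = "..."   # placeholder application key

                                    # Step 1: authenticate. b2_authorize_account returns the API base URL,
                                    # an auth token, and the account ID.
                                    auth = requests.get(
                                        "https://api.backblazeb2.com/b2api/v2/b2_authorize_account",
                                        auth=(KEY_ID, APP_KEY),
                                    )
                                    auth.raise_for_status()
                                    auth = auth.json()
                                    api_url = auth["apiUrl"]
                                    token = auth["authorizationToken"]

                                    # Step 2: list every bucket just to learn one bucket's ID.
                                    buckets = requests.post(
                                        f"{api_url}/b2api/v2/b2_list_buckets",
                                        headers={"Authorization": token},
                                        json={"accountId": auth["accountId"]},
                                    )
                                    buckets.raise_for_status()
                                    bucket_id = next(
                                        b["bucketId"]
                                        for b in buckets.json()["buckets"]
                                        if b["bucketName"] == "my-bucket"   # placeholder bucket name
                                    )
                                    ```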
                                    jnm
                                      So I decided to use the API to delete them all. And that's been ... interesting.
                                      jnm
                                        Backblaze doesn't let you delete a non-empty B2 bucket, though, and there's no bulk delete option in the API. You can select-all and delete on the web UI, but with 13k+ files in that bucket (remember the connection timeouts?) it chokes/dies.
                                        jnm
                                          I decided to ditch that particular experiment because that data was already offsited elsewhere, and I didn't feel like tracking down the source of the errors. So I was going to delete all the files.