One of the first things I wrote about when I started this blog was my workaround for exporting from TechSmith Snagit to Amazon S3. That worked okay for Windows, but I've been working on a Mac significantly more of late and I've missed that functionality. So I took another look at my options, since TechSmith itself still hasn't built a Snagit-to-S3 output for either Windows or Mac.
I feel like exporting on Mac shouldn't be a problem. There's no S3 Browser available, but you could replace it with s3cmd and do the same thing. The catch: there's no Program Output option in Snagit for Mac. That's right, on Windows you can essentially build your own outputs, but on Mac you're out of luck.
I came up with a workaround, though. It’s not pretty but it works. It also works on Windows, but with better options available I’m not sure there’s a reason to use it.
I use ExpanDrive to map my S3 buckets as a local drive. Then I can save from Snagit straight to the location I want in S3. That part’s great. It’s pretty much seamless. ExpanDrive is a really awesome tool. Probably too expensive if all you’re using it for is Snagit exporting, but worth taking a look at if you’re working with S3 in other ways.
The problem is you don’t get the uploaded URL out of this. That’s where it gets hacky.
I wrote a Chrome extension that gets me a list of the last five files uploaded to this particular S3 bucket. So after saving my file, I have to go to my browser to get its URL. Extra steps. The bonus is that I can get the URL any time later.
Since the ExpanDrive part of it works out of the box, here’s the breakdown of my Chrome extension.
I start with a script on the server side that uses the AWSSDKforPHP2 to read in the files from my filebox, sort by date, and grab the five most recent. Those five are then spit out as JSON.
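Here's a rough sketch of what that script looks like, using the AWS SDK for PHP 2. The bucket name, credentials, and public URL pattern are placeholders, not my real setup:

```php
<?php
// Sketch of the server-side listing script. Bucket name, credentials, and
// the public URL pattern are placeholders.
require 'vendor/autoload.php';

use Aws\S3\S3Client;

$bucket = 'my-filebox-bucket';

$client = S3Client::factory(array(
    'key'    => 'YOUR_AWS_KEY',
    'secret' => 'YOUR_AWS_SECRET',
));

// List everything in the bucket and note each object's last-modified time.
$result = $client->listObjects(array('Bucket' => $bucket));

$files = array();
foreach ($result['Contents'] as $object) {
    $files[] = array(
        'url'      => "https://{$bucket}.s3.amazonaws.com/" . $object['Key'],
        'modified' => strtotime($object['LastModified']),
    );
}

// Newest first, keep the five most recent, spit them out as JSON.
usort($files, function ($a, $b) {
    return $b['modified'] - $a['modified'];
});

header('Content-Type: application/json');
echo json_encode(array_slice($files, 0, 5));
```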
To access that file from the Chrome extension, it's important to include that domain in the permissions section of the manifest.json file. Also necessary is the clipboardWrite permission. In addition to the required manifest file, the extension uses a single HTML page, a stylesheet (which I'll skip here since how it looks doesn't really matter), and a JavaScript file. There are also some images, but I'll skip those, too.
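The manifest ends up looking something like this (manifest version 2, which is what Chrome wanted at the time; the extension name, popup file name, and domain are placeholders):

```json
{
  "manifest_version": 2,
  "name": "Recent S3 Uploads",
  "version": "1.0",
  "browser_action": {
    "default_title": "Recent S3 Uploads",
    "default_popup": "popup.html"
  },
  "permissions": [
    "clipboardWrite",
    "https://example.com/"
  ]
}
```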
The important things in the HTML page are the UL element with an ID of content and the inclusion of main.js. The UL is what our JS targets for dynamically adding elements.
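The page itself is barely more than a shell; something along these lines (the stylesheet and file names are placeholders):

```html
<!DOCTYPE html>
<html>
  <head>
    <link rel="stylesheet" href="style.css">
  </head>
  <body>
    <!-- main.js fills this list with the five most recent uploads -->
    <ul id="content"></ul>
    <script src="main.js"></script>
  </body>
</html>
```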
The request_data function wraps a call to the PHP script noted above. In that request's onload handler, we call populate_list.
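It's a plain XMLHttpRequest; the script URL below is a placeholder for wherever the PHP file lives:

```javascript
// Fetch the JSON list of recent uploads from the server-side PHP script.
function request_data() {
  var xhr = new XMLHttpRequest();
  xhr.open('GET', 'https://example.com/recent-uploads.php', true);
  xhr.onload = function () {
    populate_list(xhr.responseText);
  };
  xhr.send();
}

// Kick things off when the popup opens.
document.addEventListener('DOMContentLoaded', request_data);
```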
The first thing we do in populate_list is parse the text we got from the PHP script into an actual JSON object. Then we remove any list items we may already have in our previously-mentioned UL. We loop through each of the items in our JSON object and create new elements for them: each item gets an LI with an A inside it. The A's HREF is the item's URL and its TARGET is set to _blank so it opens in a new window. We also wire up the A's onclick to a copy_to_clipboard method I grabbed from someone's GitHub, so clicking an entry saves its URL to the clipboard.
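Roughly like so; the item field names assume the JSON shape from the PHP sketch above, and the copy_to_clipboard helper shown here is the usual textarea-and-execCommand trick rather than the exact version I borrowed:

```javascript
// Rebuild the list from the JSON the PHP script returned.
function populate_list(response_text) {
  var items = JSON.parse(response_text);
  var list = document.getElementById('content');

  // Clear out any list items left over from a previous load.
  while (list.firstChild) {
    list.removeChild(list.firstChild);
  }

  items.forEach(function (item) {
    var li = document.createElement('li');
    var a = document.createElement('a');
    a.href = item.url;
    a.target = '_blank';       // open in a new window/tab
    a.textContent = item.url;
    a.onclick = function () {  // clicking also copies the URL
      copy_to_clipboard(item.url);
    };
    li.appendChild(a);
    list.appendChild(li);
  });
}

// Copy text to the clipboard via a temporary textarea and execCommand.
function copy_to_clipboard(text) {
  var textarea = document.createElement('textarea');
  textarea.value = text;
  document.body.appendChild(textarea);
  textarea.select();
  document.execCommand('copy');
  document.body.removeChild(textarea);
}
```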
I’m certain that this could be cleaned up and made more configurable and turned into a publicly-available extension but I’m not going to bother with it. I figured I’d put this out and hope that it helps someone.
I will say that one idea I’m intrigued by is replacing the PHP script with an AWS Lambda function that triggers any time the S3 bucket is updated. I’m not entirely certain how that would work but it seems possible.
I wonder if you could use something like fswatch (https://github.com/emcrisostomo/fswatch) and pbcopy to eliminate the Chrome step. fswatch could watch the directory where your images get saved, and that would trigger a step to upload to S3, figure out the resulting URL, and copy it to your clipboard.
I haven’t used fswatch so I can’t vouch for it, but it looks like it’d work.
I like that thought. I’ll look into it when I get a chance.