It’s Good To Be A Coder

While driving over the weekend, I had a humorous thought that I think I can stretch into an extended joke.  The premise of the joke is simple: recaption an old comic strip to change the context of the comic and the dynamic between the characters.

So, to begin, I need the original comic strips so I can deface them and make them funny in a different way.  I found the website hosting the comics and looked into what effort it would take to get them.

Here’s a small lesson in image thievery.  The most pedestrian way to get an image off the internet is to do a screen capture with something like Snipping Tool.  That route will burn you out in a matter of minutes.

The better way is to right-click the image and choose Save As.  This preserves the original image dimensions without any variance from lassoing.  However, web developers have gotten smarter about this technique and will add protective code to prevent the unwanted downloading of their files.  One way to do this is to capture the mouse’s right-button click so the context menu never appears.  Another way is to overlay a transparent image on top of the real one, so when you right-click and Save As, you save the transparent image instead of the one you actually want.  The most effective way is to serve the image as a CSS background, so there’s no image element to right-click at all.  The truth, however, is that if it’s in your browser, your browser requested the file, and you can request it too.
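
For the curious, here is a rough TypeScript sketch of what those tricks look like from the page’s side.  The selectors and file names are made up for illustration; the point is just how little it takes.

// Sketch of the right-click "protections" described above (selectors and paths are made up).

// 1. Capture the right-click so the browser's context menu never appears.
document.addEventListener("contextmenu", (event) => {
  event.preventDefault();
});

// 2. Overlay a transparent image on top of the real one, so Save As grabs the decoy.
const decoy = document.createElement("img");
decoy.src = "/images/transparent-pixel.png"; // hypothetical 1x1 transparent PNG
decoy.style.cssText = "position:absolute; inset:0; width:100%; height:100%;";
document.querySelector(".comic-frame")?.appendChild(decoy);

// 3. Serve the real image as a CSS background, so there is no <img> tag to right-click.
const frame = document.querySelector<HTMLElement>(".comic-frame");
if (frame) {
  frame.style.backgroundImage = "url('/images/todays-comic.png')";
}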

To get around these tricks, you can use the developer tools built into most browsers.  They will allow you to look at the source code and find the URL of the actual image you want.  That is the path I originally took, finding the image tag, copying the URL into a new tab, then downloading/saving the image.  That lasted for about 7 images before it was too much effort.  Coders are lazy, and they write programs to do the work for them.

Looking at the URL for the image, I saw it was a dynamic URL, not a static one.  It was similar to:

http://thewebsite.com/content.php?file=QXJlbid0IHlvdSBjbGV2ZXI/

At first, I was discouraged, because the file parameter just seemed to be a string of random characters.  There wouldn’t be any way to turn that into a reliable sequence to cycle through.  But the more I looked at the URL, the more familiar the text seemed.  I took a guess that the string was Base64-encoded, and my guess was correct.  Decoding the string resulted in another URL, although that URL was not accessible from the internet.  It was a page that “content.php” had access to, though. (Just as an aside, this programming design screams “security issues!”)
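
Decoding a Base64 value like that is a one-liner in most languages.  Here is a minimal Node/TypeScript sketch; the parameter and the decoded path below are hypothetical stand-ins, not the site’s real values.

// Pull the "file" parameter off a content.php-style URL and Base64-decode it.
// The parameter and the decoded path below are made up for illustration.
const pageUrl = new URL("http://thewebsite.com/content.php?file=L2ludGVybmFsL2NvbWljcy8yMDEyLTA1LTEyLnBuZw==");
const encoded = pageUrl.searchParams.get("file") ?? "";

const decoded = Buffer.from(encoded, "base64").toString("utf8");
console.log(decoded); // -> /internal/comics/2012-05-12.png (a made-up internal path)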

The decoded URL had a very understandable structure that would allow cycling through comics by date.  I would just need to construct that URL, Base64-encode it, and pass the encoded value as the file parameter to the content.php address.
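
Putting that together, the construction step is small.  Here is a hedged TypeScript sketch; the internal path format and the comicUrlFor() helper name are placeholders I made up, since only the real site knows its actual structure.

// Build the content.php URL for a given date.  The internal path format is a
// hypothetical stand-in; the real structure was simply date-based.
function comicUrlFor(date: Date): string {
  const yyyy = date.getFullYear();
  const mm = String(date.getMonth() + 1).padStart(2, "0");
  const dd = String(date.getDate()).padStart(2, "0");

  const internalPath = `/internal/comics/${yyyy}-${mm}-${dd}.png`; // made-up format
  const encoded = Buffer.from(internalPath, "utf8").toString("base64");

  return `http://thewebsite.com/content.php?file=${encodeURIComponent(encoded)}`;
}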

I fired up Visual Studio and added a datebox and a button.  I wrote 4 lines of code to construct the URL based on the date in the datebox, then download and save the image.  Then I set up the button to decrement the datebox by a day and process that image.  Now, all I had to do was click the button over and over and the images would dump into a folder.  If I wanted to, I could set up a loop to cycle through the dates and wouldn’t even need to click the button at all.  This was less than 15 minutes of effort.
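
The original was a handful of lines behind a Visual Studio button; for illustration only, here is the same idea as a Node/TypeScript sketch that loops instead of waiting for clicks, reusing the hypothetical comicUrlFor() helper from above.

import { mkdir, writeFile } from "node:fs/promises";

// Walk backwards one day at a time, saving each comic into a local folder.
// Uses the hypothetical comicUrlFor() helper sketched earlier; needs Node 18+ for fetch.
async function downloadComics(startDate: Date, count: number): Promise<void> {
  await mkdir("comics", { recursive: true });
  const date = new Date(startDate);

  for (let i = 0; i < count; i++) {
    const response = await fetch(comicUrlFor(date));

    if (response.ok) {
      const bytes = Buffer.from(await response.arrayBuffer());
      const name = date.toISOString().slice(0, 10); // e.g. 2012-05-12
      await writeFile(`comics/${name}.png`, bytes);
    }

    date.setDate(date.getDate() - 1); // step back one day
  }
}

downloadComics(new Date(), 30).catch(console.error);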

And that is yet another example of the power that comes from being a programmer.
