Technology Can Do That, So Let’s Not

I wonder what technology is coming to, and at the same time I wonder if I’m just getting old.  I look at things that were normal for me at the peak of my programming days and wonder if older developers thought I was an idiot for doing things that way.  For example, did the procedural programmers of old see object-oriented design as ridiculous, slow, and inefficient?  Maybe.  But OO programming is pretty much the standard now.

But for some reason, I am confused as to why implicit typecasting is suddenly “awesome.”  We had that way back when in VB and Classic ASP, and we were hated for it.  Then .NET came along and strong typing became the thing to do.  Now we’re back to implicit typecasting and scripting languages, just like we had with ASP.

But the thing that’s really got me confused is cloud computing: why everyone thinks it’s great to rely on someone else instead of relying on yourself.  I guess the argument is “they can do it so much better than we can, so why not let them?”  There’s no more building yourself up?  You have to start at the top?  Talk about immediate gratification.  That’s bitter old man talk, there.

At my job, a co-worker (thankfully not me) has an integration project using Amazon Web Services (AWS).  As best I can tell, it’s a web service that sits in front of a message queue system.  To be slightly vague about the project, our client sends us a request with a questionnaire.  We collect the responses to the questions and send each individual answer back to the client as a message via AWS.  This infrastructure was forced on us; it was not our choice.  So, my old-timey brain is thinking, “Why must a unit of work (a completed questionnaire) be transmitted in discrete pieces when it needs to be a single unit on their end?”  The answer to this is “don’t worry about it.”  The reason is a new crazy programming concept: eventual consistency.  Apparently our client is so hip and modern, they are using both “the cloud” and “eventual consistency” in their application design.
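For the curious, here is roughly what that “send each answer as its own message” step boils down to.  This is a minimal sketch, not our actual code: it assumes the AWS piece is SQS and talks to it through Python’s boto3 library, and the queue URL and payload fields are made up for illustration.

    import json
    import boto3

    sqs = boto3.client("sqs")
    # Hypothetical queue URL; the real one belongs to the client.
    ANSWER_QUEUE_URL = "https://sqs.us-east-1.amazonaws.com/123456789012/client-answers"

    def send_answers(questionnaire_id, answers):
        """Send each individual answer back to the client as its own message."""
        for question_id, answer in answers.items():
            sqs.send_message(
                QueueUrl=ANSWER_QUEUE_URL,
                MessageBody=json.dumps({
                    # Correlation data so the client can eventually reassemble
                    # the completed questionnaire on their end.
                    "questionnaire_id": questionnaire_id,
                    "question_id": question_id,
                    "answer": answer,
                }),
            )

What you do (and don’t) find out after a send like this is part of the list of guidelines below.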

Eventual Consistency is nothing new.  Airlines have been using it forever.  Did you lose your luggage?  Is it now five states away?  It will eventually get to you and everything will be fine.  FedEx started using it with SmartPost.  If you ever had something shipped via SmartPost, you could watch the package get shipped all over the country, but eventually it would get to you.  With every real-world application of Eventual Consistency, you are guaranteed to get what you want, but you are never sure when it will happen.  Why this ever became an acceptable solution is beyond me.

To wrap this up, but to leave it with some final, head-shaking, “why is this acceptable?” thoughts, here are some of the documented guidelines for using Amazon Web Services (a rough sketch of what the receiving side looks like in code follows the list):

  • When you make a request for new messages, you may only request up to 10 new messages at a time.
  • If you request 10 messages, you may not get 10.  You may get fewer than 10, even if there are more than 10 messages in the queue.
  • If there are a very small number of messages in the queue, you may get zero.
  • Despite AWS’s inability to deliver the messages you request when you request them, all of the messages are available for viewing through their control panel.
  • When you send a message, you get no positive acknowledgement that it was sent successfully.  If you did not get an error during sending, you assume it went through.
  • You have no idea whether the message was delivered to the destination queue successfully.  You will only know when the receiver picks up the message, and that confirmation is sent as an acknowledgement on another queue.  You must query that queue and match up the acknowledgements with the messages you originally sent.
  • The acknowledgement queue has all the limitations of the aforementioned message requests.
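To make that matching exercise concrete, here is a rough sketch of what the receiving side ends up looking like.  Same caveats as before: this assumes SQS via boto3, and the queue URL, correlation id, and acknowledgement payload are all made up for illustration.

    import json
    import boto3

    sqs = boto3.client("sqs")
    # Hypothetical acknowledgement queue URL.
    ACK_QUEUE_URL = "https://sqs.us-east-1.amazonaws.com/123456789012/client-acks"

    # Messages we sent and have not yet heard back about, keyed by a
    # correlation id we put in the message body ourselves.
    pending = {"q42-7": {"questionnaire_id": "q42", "question_id": 7}}

    def poll_acknowledgements():
        """Poll the acknowledgement queue and match acks to what we sent."""
        # You may ask for up to 10 messages.  You may get fewer, or none at
        # all, even when the console shows messages sitting in the queue.
        resp = sqs.receive_message(
            QueueUrl=ACK_QUEUE_URL,
            MaxNumberOfMessages=10,
            WaitTimeSeconds=20,  # long polling cuts down on empty responses
        )
        for msg in resp.get("Messages", []):
            ack = json.loads(msg["Body"])
            # Cross off whatever this acknowledgement corresponds to.
            pending.pop(ack.get("correlation_id"), None)
            # Receiving is not enough: delete the message or it reappears
            # after the visibility timeout.
            sqs.delete_message(QueueUrl=ACK_QUEUE_URL, ReceiptHandle=msg["ReceiptHandle"])
        return pending  # whatever is left is still unacknowledged

You call poll_acknowledgements() on a schedule until pending is empty, and “eventually” it is.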

This is true progress.
