Latency Tester – Because Speed Matters


Speed is important for web apps. This has been proven time and again by Amazon and Google. Even if your web app isn't a portal for direct revenue like e-commerce or advertising, speed still matters. No matter how awesome an app's functionality is, a slow app reflects poorly and builds resentment among its users.

In my role at Arrowpointe, the premier provider of mapping and geo-analytic solutions for the Salesforce platform (oh, such a shameless plug), we have been looking at larger architectural problems. The Salesforce platform is great, and I'm a huge advocate, but not every problem can be 100% solved on the platform. Many times you need to utilize outside services in the form of public APIs, or even home-built solutions running on Heroku or AWS, to fill the gaps the platform cannot. Once we reach out to external services, we start to lose control of speed. External web services introduce latency that may seriously degrade the user experience. To help measure this I've created a rudimentary app I'm calling the Latency Tester.

This app is a simple Visualforce page that sends a URL string to an Apex method using Visualforce Remoting. That method then makes a callout from Apex to a super simple hello-world Node.js app running on Heroku. Two timers run during this operation. The first starts when a button is clicked on the client side and ends when the page receives a response from the server. The second starts just before the Apex callout to Heroku and ends when Apex receives a response. This also tests the latency between the Salesforce servers and the Heroku servers, something I've always been curious about.
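The first timer can be sketched with a small helper. This is illustrative only, not the actual app's code; `callout` stands in for the Visualforce Remoting call (a function that takes a completion callback):

```javascript
// Illustrative sketch of the client-side timer: start the clock when
// the button handler fires, stop it when the remoting callback returns.
// `callout` is a stand-in for the actual Visualforce Remoting invocation.
function timeRoundTrip(callout, done) {
  const start = Date.now();
  callout(function (result) {
    // Full round-trip time in milliseconds, plus the server's reply.
    done(Date.now() - start, result);
  });
}
```

The same stopwatch pattern applies to the second timer in Apex, wrapped around the HTTP callout instead of the remoting call.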

It is important to note this is actually testing application latency and not pure network latency. The whole request must first move through client-side JavaScript, then Salesforce has to spin up its software stack to make the callout, the receiving server's software handles the request and provides a response, and finally the response is handled once more by the JavaScript on the client.

This app is on GitHub here:
…here is an unmanaged package:
…and here is a video walk-through:

There is no tab in the app, so navigate directly to the /latency page.

The Results!

[Screenshot: Latency Tester results, 2013-06-03]

For Apex making a callout to Heroku, the average response time I see is around 20ms. I have seen this as low as 10ms and as high as several hundred. This isn't too bad considering this is application latency and not true network latency. For comparison purposes, when I ping the target site directly from my laptop I get 36ms almost every time. So Salesforce wins, and it should, considering there is a fat fiber pipe between the Salesforce and Heroku (AWS) data centers.

For client-side application latency, it seems to average around 250ms. Not terrible, but not great, and I think there is room for improvement here. Again, for comparison purposes, when I ping Salesforce directly from my laptop I see times around 40ms. This means the remaining ~200ms is most likely the software stack handling the request and providing the response. Given this is a super simple request with no database operations, I'd really like to see this in the low 100s of milliseconds.
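That back-of-the-envelope split can be written out explicitly, using the rough averages quoted above:

```javascript
// Rough breakdown of the ~250ms client-side number: subtract observed
// network latency from total application latency to estimate the time
// spent in the software stack (figures are the averages quoted above).
const totalApplicationMs = 250; // button click to response in the browser
const networkPingMs = 40;       // direct ping from the laptop
const softwareStackMs = totalApplicationMs - networkPingMs;
console.log(softwareStackMs);   // 210 — roughly the ~200ms quoted above
```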

All in all, a fun little experiment, and I'm curious what other individuals see around the country and the world. Please post the times you are seeing in the comments section below.