
Why Comcast is the Worst Company in America


I never had a huge problem with Comcast. For me their internet uptime and speeds have been very reliable. When field technicians came out to our house they were often on time and quick to perform the services. Even so, we did cancel our television service over a year ago and haven’t looked back. This leaves us using Comcast (or XFINITY if anyone actually calls it this) only for their internet service. So as mentioned before I’ve never really had an issue with Comcast, until today.

I figured it couldn’t hurt to see if there were any other internet packages that would give me a better bang for the buck considering I may be replacing my old DOCSIS 2.0 modem in the near future. To do this I logged into my Comcast account and selected the “Add or Upgrade Service” link…

[Screenshot: the “Add or Upgrade Service” link in my Comcast account]

It’s worth noting that we currently pay $66.95 for Comcast Internet and remember we have no bundled television service. You’ll also notice it doesn’t even list what speed tier we are on but download speeds are consistently around 15Mbps.

[Screenshot: bill showing $66.95 for internet-only service]

The price breakdown on the back of the bill also doesn’t list any additional taxes or fees for the internet. Just a plain old $66.95.

[Screenshot: price breakdown on the back of the bill, just $66.95]

Back at the online upgrade page I reviewed the options for Internet and I saw the 50Mbps Blast! Tier for $61.95…

[Screenshot: 50Mbps Blast! tier listed at $61.95]

Cool, looks like I can upgrade the speed of my internet and lower my costs. I’m not even mad about this. Companies change the prices of their services all the time and it’s not their responsibility to let me know the price dropped. I also know that sometimes these prices require bundled services, but if you look at the “Details and Restrictions” listed above there is no requirement to have additional services. I did check other packages, and they explicitly state that a subscription to TV or Voice is required…

[Screenshot: package whose Details and Restrictions require a TV or Voice subscription]

…but for the Blast! package at $61.95 the Details and Restrictions state no bundled services are required. Next I select the “Customize” button for this package…

[Screenshot: Blast! Details and Restrictions listing no bundling requirement]

The BLAST! plan requires a DOCSIS 3.0 modem to achieve the advertised speeds so on the next page Comcast lets me know this and offers to include a modem. Thanks, but no thanks, I’ll buy my own. Note the “Monthly Total” price is still listed at $61.95.

[Screenshot: modem selection page, “Monthly Total” still $61.95]

I hit the Next button, and on the next page I now have to select a “Self-Install” package for $9.95? This starts to rub me the wrong way, but whatever, I’ll make up the cost in monthly savings in less than two months.

[Screenshot: required “Self-Install” package added for $9.95]

I hit the Next button again and I am ready to review and submit my order…

[Screenshot: order review page, still $61.95 with no one-time charges]

Everything is looking good, no one-time charges, and price is still listed at $61.95. I clicked the “Submit Your Order” button and was then presented with this…

[Screenshot: prompt requiring a chat with an agent before the order can be completed]

What the heck is this? I’m only proceeding through a normal checkout process and submitting my order. Why do I need to talk with someone?

Long story short: the $61.95 price displayed was not actually available to me. I needed to add multiple other Comcast services to receive this price. Never on any page before “submitting” my order was this communicated, as you can see from all of the screenshots above. I was logged in to my Comcast account when I selected the “Upgrade Service” option. Comcast knew who I was and what services I currently subscribe to. They should not display prices that are not available to me when they know what my current subscriptions are. I’m fine with them showing prices for bundled services, but this needs to be explicit and obvious. Not nonexistent.

If this is not a textbook definition of bait-and-switch and false advertising then I don’t know what is.

After the web chat, which took way too long and was filled with incompetence, I was able to fill out a survey rating my experience with the upgrade process. I filled out the form, hit submit, and was then presented with this lovely message…

[Screenshot: error message after submitting the survey]

I now understand why Comcast is often regarded as the worst company in America.

Looking for an Application Developer


At Arrowpointe we are growing and are looking to add another developer to the team. This is a great opportunity to join a small, growing, and nimble company in a hot space. There is an official job posting and feel free to check it out, but I want to talk about some of the other aspects of the role and the type of person we are looking for.

We are looking for someone who is a self-starter, competitive, and always learning. You have a little bit of that entrepreneurial spirit, as we are still small and growing. Someone who can identify ways to make a product better and then run with those ideas. If you want a job where someone gives you a list of specific developer tasks to complete, this is not the job for you. You’ll be mocking up designs, building them out, getting feedback, and continually iterating to make the product better. And yes, you may actually need to assist with support calls once in a while. It’s really not that bad, and more times than not it provides incredible insight into how the features you build are actually used. Oftentimes in ways you never intended or imagined.

Still here? Cool. The technical side of the work should also provide fun and challenging problems to work on. Our stack is composed mostly of Heroku, Google Maps, jQuery, Angular, Node, and Express.

Here are a few things we are currently working on, or recently worked on, to give you a sense of the type of work you’ll be doing.

  • Built an HTML5 mobile mapping application for Salesforce1, still lots to do here
  • Built a node-webkit app to assist in demoing HTML5 mobile apps, this is used in the video above
  • Migrating parts of our app originally built with jQuery to Angular.js
  • Google is deprecating their map markers API so we had to build our own API with node and Express.js to generate dynamic colored map pins
  • Speaking of APIs we’ll be doing more in this space as we hit the limits of the platform and have to offload work to other services, mainly an Express.js based API running on Heroku
  • Looking into more GIS and shape/boundary analysis

Some other benefits:

  • We are bootstrapped (and profitable!) so we can do whatever we want without having to worry about investors or shareholders 😉
  • If you live in Seattle we can work together here, (but not everyday because working from home is cool too)
  • Doing things like this is perfectly acceptable

Interested? Feel free to reach out to me on Twitter @TehNrd or use the contact form here.


Host multiple node.js apps on the same subdomain with Heroku?


At Arrowpointe we are currently looking into building some node.js-backed APIs on Heroku, but it didn’t take long to discover one specific limitation of the Heroku platform: there is no easy way to host multiple applications on the same domain, or more specifically the same subdomain.

Now why on earth would you want multiple Heroku apps on the same domain? There are lots of reasons: consistent branding of your API endpoint, handling all authentication in one place, or implementing some sort of front-line caching logic, to name a few. Heck, even just redirecting the bare endpoint to your developer site may be useful. As the API grows this approach also makes it easier to build a modular API. It’s not uncommon for large web APIs to consist of many smaller apps all accessed through the same endpoint. Maybe /launchRockets is I/O bound and node.js is the perfect solution, whereas /buildRockets is more CPU bound and node.js may not be the right language for it to run optimally. Breaking the API apart into smaller pieces allows you to make more flexible design and architecture decisions. Also, if someone messes up and commits a system-breaking change to /launchRockets it won’t take down /buildRockets. (Please excuse me for not making these URLs more “RESTful”, such as /rocket/launch or /rocket/build; these URLs are just examples, relax.)

Cool, so let’s say we want to create a single front door for our API. This is the number one requirement based on some of the reasons listed above. This will be how users access many different pieces of API functionality through different URLs such as /launchRockets and /buildRockets.

From a code base perspective /buildRockets and /launchRockets are totally separate and totally different. This type of design is very difficult to accomplish with Heroku, but we do have a few options. It is also worth noting that how to split up and break apart your API is a very grey area. Maybe this should all be in the same app, maybe not. It really depends on the API, what it does, who will be using it, and more.

The Options

1) Use different subdomains, with a separate Heroku app for each API. Something like:

This works but doesn’t address the requirement of being hosted on the same subdomain. It actually isn’t a terrible way of solving the problem and may end up being the best option. This is also the approach Heroku recommended when I submitted this question to their support team.

2) Combine the code from both APIs/apps and host this in one Heroku app. This is the grey area mentioned above and really needs to be decided on a case by case basis.

3) Have 3 different Heroku apps. The first acts as a proxy that inspects the incoming request and forwards it to the appropriate API app using a module like bouncy or http-proxy.

Option #3 is interesting, specifically because these are node.js apps. In a framework using a synchronous architecture for handling requests and responses, option #3 wouldn’t work or scale well: the capacity of the proxy would need to be 1-to-1 with the two other apps, essentially doubling the costs. A node.js proxy should be much more performant because it doesn’t tie up a worker while waiting for the actual API apps to respond. Even so, I have concerns with this approach regarding the ability to scale and monitor both the API apps and the proxy app. For example, if the API starts to slow down, is that because the proxy app is overwhelmed or because the API app simply needs more dynos? There is no doubt this approach adds more complexity to monitoring.

I scoured the internet and Stack Overflow looking for answers to this problem and there just aren’t a lot of good answers… or answers at all. Most relate to synchronous architectures where #3 doesn’t even make sense, and the default answer is to run nginx on an EC2 instance. Infrastructure, servers, operating systems, load balancers… what is this strange talk 😉 . The whole point of Heroku is to avoid this type of infrastructure management. Heroku suggested option #1, also with nginx, and my Stack Overflow question never got much of a response. It seemed the best way to answer this question was to build it and see what happens, so that is exactly what I did.

Below are two node.js apps: one with a simple ‘hello world’ response to simulate the API app, and another that proxies requests to it. Here is the API app:

var http = require('http');
var cluster = require('cluster');
var numCPUs = require('os').cpus().length;

if (cluster.isMaster) {
  // Fork one worker per CPU core
  for (var i = 0; i < numCPUs; i++) {
    cluster.fork();
  }
  cluster.on('exit', function(worker, code, signal) {
    console.log('worker ' + worker.process.pid + ' died');
  });
} else {
  http.createServer(function (req, res) {
    res.writeHead(200, {'Content-Type': 'text/plain'});
    res.end('Hi from target app.');
  }).listen(process.env.PORT || 5000);
}

And here is the proxy app:

var http = require('http');
var httpProxy = require('http-proxy');
var cluster = require('cluster');
var numCPUs = require('os').cpus().length;

if (cluster.isMaster) {
  // Fork one worker per CPU core
  for (var i = 0; i < numCPUs; i++) {
    cluster.fork();
  }
  cluster.on('exit', function(worker, code, signal) {
    console.log('worker ' + worker.process.pid + ' died');
  });
} else {
  var proxy = httpProxy.createProxyServer({});
  http.createServer(function(req, res) {
    // Rewrite the Host header so Heroku's router sends the request
    // to the target app ('target-app.herokuapp.com' is a placeholder
    // for the target app's actual hostname)
    req.headers.host = 'target-app.herokuapp.com';
    proxy.web(req, res, { target: 'http://target-app.herokuapp.com' });
  }).listen(process.env.PORT || 5000);
}

If you view the web pages you’ll see they both return the same response, ‘Hi from target app.’, which is expected. These are both running on a 1X Heroku dyno.


The first thing I needed to determine was how much latency this proxy app was adding to the entire request cycle. I could try to test the latency between Heroku apps but what I really care about is end user latency so I simply performed 50 pings to each URL from my computer and looked at the results:

Direct to API: min/avg/max/stddev = 34.305/39.436/57.644/4.115 ms

With Proxy: min/avg/max/stddev = 35.414/38.870/56.269/3.480 ms

Going through the proxy was actually about 1ms faster on average, so we can safely say the proxy app adds no meaningful latency overhead. This isn’t unexpected considering both apps are running in the same datacenter.
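For anyone reproducing this, the min/avg/max/stddev summary that ping prints reduces to a few lines of arithmetic over the collected round-trip samples (stddev here is the population standard deviation; ping implementations vary slightly in how they compute deviation):

```javascript
// Summarize latency samples (in ms) the way ping does:
// min / avg / max / stddev
function latencyStats(samples) {
  var n = samples.length;
  var sum = samples.reduce(function (a, b) { return a + b; }, 0);
  var avg = sum / n;
  var sqDiff = samples.reduce(function (a, b) {
    return a + (b - avg) * (b - avg);
  }, 0);
  return {
    min: Math.min.apply(null, samples),
    avg: avg,
    max: Math.max.apply(null, samples),
    stddev: Math.sqrt(sqDiff / n) // population standard deviation
  };
}

var s = latencyStats([2, 4, 4, 4, 5, 5, 7, 9]);
console.log(s.min, s.avg, s.max, s.stddev); // 2 5 9 2
```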

Throughput and Response Time

The next step is to determine the maximum usable capacity of the API app when hit directly, without the proxy. Using Apache Bench we will first set a performance baseline for the API app. The goal here is not to max out req/sec but rather to measure the average time per request under a very light load, so we can establish a baseline response time.

ab -k -n 30 -c 1
Requests per second:    8.18 [#/sec] (mean)
Time per request:       122.193 [ms] (mean)
Time per request:       122.193 [ms] (mean, across all concurrent requests)

Now let’s increase concurrent connections until request time hits somewhere around 200ms. Eventually I was able to confidently land at around 160 concurrent connections for this simple hello world API.

ab -k -n 20000 -c 160
Requests per second:    823.59 [#/sec] (mean)
Time per request:       194.272 [ms] (mean)
Time per request:       1.214 [ms] (mean, across all concurrent requests)
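As a sanity check, the three ab numbers above are tied together by Little’s law: throughput ≈ concurrency / mean response time. Plugging in the measured values:

```javascript
// Little's law sanity check on the ab results above
var concurrency = 160;           // from -c 160
var meanResponseSec = 0.194272;  // 194.272 ms per request
var predictedThroughput = concurrency / meanResponseSec;
console.log(predictedThroughput.toFixed(2) + ' req/sec'); // ~823.59, matching ab
```

The same relation is a handy way to reason about any of the runs in this post: at a ~200ms mean response time, 40 concurrent connections can sustain only about 200 req/sec no matter how fast the server is.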

We max out somewhere around 820 req/sec maintaining a response time under 200ms. Not too shabby. Now let’s run the same test against our node.js proxy app and see if it can maintain 820 req/sec with 160 concurrent connections and a response time near or under 200ms.

ab -k -n 20000 -c 160
Requests per second:    256.72 [#/sec] (mean)
Time per request:       623.246 [ms] (mean)
Time per request:       3.895 [ms] (mean, across all concurrent requests)

Not surprisingly, the proxy app is not able to match the results of hitting the API app directly, as the average response time shot up to ~600ms. I lowered the number of concurrent connections and settled on ~40 while still maintaining a response time under 200ms.

ab -k -n 20000 -c 40
Requests per second:    195.59 [#/sec] (mean)
Time per request:       204.510 [ms] (mean)
Time per request:       5.113 [ms] (mean, across all concurrent requests)

Without the proxy we could get ~823 req/sec, but when using the proxy this drops to ~195 req/sec. This is a significant drop and more than I expected. Yet a proxy app doing 195 req/sec on a single 1X dyno is still probably plenty fast for a lot of use cases. More than likely your API app is going to be doing something more complex than ‘hello world’, and as long as its capacity is lower than 195 req/sec the proxy app will not get in your way.

Let’s see what happens when we scale the number of dynos on the proxy app to 4. If these dynos scale horizontally in a perfectly linear fashion (4 × ~195 ≈ 780 req/sec), this should be close to enough to maintain the original 160 concurrent connections.

ab -k -n 20000 -c 160
Requests per second:    622.65 [#/sec] (mean)
Time per request:       256.966 [ms] (mean)
Time per request:       1.606 [ms] (mean, across all concurrent requests)

Almost, but not quite. What about using one 2X dyno with the original 160 concurrent connections?

ab -k -n 20000 -c 160
Requests per second:    490.64 [#/sec] (mean)
Time per request:       326.105 [ms] (mean)
Time per request:       2.038 [ms] (mean, across all concurrent requests)

A big improvement over the single 1X dyno! Oddly enough, I tried scaling up the 2X dynos and the performance increase was marginal. And finally, what about running one of the big-daddy PX dynos:

ab -k -n 20000 -c 160
Requests per second:    606.67 [#/sec] (mean)
Time per request:       263.734 [ms] (mean)
Time per request:       1.648 [ms] (mean, across all concurrent requests)

Much closer to original performance but the cost/performance ratio here is terrible considering PX dynos run $500+ a month.

Takeaways

  • node-http-proxy can do ~195 req/sec with reasonable response time on a single 1X dyno
  • You will have to play with dyno quantity and type to find optimal performance
  • The proxy app is a single point of failure. If this goes down all other apps behind this proxy will not be accessible. You better be running at least 2 dynos on this proxy app.

Notice the title of this post ends with a question mark. Yes, you can do this. Is it the correct way to architect your API and Heroku apps? Who knows? At the end of the day I don’t think there is any “correct” answer. You have to decide if having one API endpoint is worth the potential performance hit and the complexity of managing multiple apps. If you need blazing performance, don’t use the proxy. If you want a single entry point for all of your API calls, it looks like node-http-proxy will be able to scale and meet most use cases.

P.S. Want to help solve problems like this? Arrowpointe is hiring and looking for a killer dev. JavaScript skills required. Reach out to me if you are interested.

Latency Tester – Because Speed Matters


Speed is important for web apps. This has been proven time and again by Amazon and Google. Even if your web app isn’t a portal for direct revenue like e-commerce or advertisements, speed is still important. Even if the functionality delivered by an app is awesome, a slow app will reflect poorly on it and breed resentment among its users.

So on to the Latency Tester. In my role at Arrowpointe, the premier provider of mapping and geo-analytic solutions for the platform (oh, such a shameless plug), we have been looking at larger architectural problems. The platform is great, and I’m a huge advocate, but not every problem can be 100% solved on the platform. Many times you need to utilize outside services, in the form of public APIs or even home-built solutions running on Heroku or AWS, to fill the gaps the platform cannot. Once we reach out to external services we start to lose control of speed. External web services will introduce latency to our web apps that may seriously degrade the user experience. To help measure this I’ve created a rudimentary app I’m calling the Latency Tester. Read More

Introducing sObject-Remote for Visualforce JavaScript Remoting


I’ve been doing a lot more client-side dev lately, and a big piece of this is working with JavaScript Remoting. While doing this work I found that basic CRUD operations with JavaScript Remoting were not as simple as they could be. I didn’t want to have to wire up insert, query, update, and delete functions for each type of object I was working with. I wanted a lightweight and fast JavaScript library that is completely dynamic and allows you to build an entire app client-side utilizing basic CRUD operations.

The result of this need is a small JavaScript library I created called sObject-Remote.

sObject-Remote is up on GitHub now and I encourage you to take a look and kick the tires. There should be plenty of examples in the readme to give you an idea of how this library can be used. Here are a couple of basic examples:

Create and Insert a record:

var acct = new sObject('Account',{Name: 'test', Industry: 'Aerospace'});
acct.insert(); // persist the new record

Query Records:

sObject.query('select Id, Name, Owner.Name from Account where Industry = \'Aerospace\' limit 5', function(sObjects, event){
    //Loop through the records returned by the query
    for(var i = 0; i < sObjects.length; i++){
        console.log(sObjects[i].Name);
    }
});

Some of the highlights:

– Super simple sObject management for JavaScript
– Currently supports 4 basic CRUD operations (more DML operations to come)
– Supports DML options
– Works with any JavaScript framework
– No 3rd party JavaScript library dependencies

A good portion of this development was me becoming more familiar with JavaScript so I am definitely open to community feedback on how to make it better. Please report issues and let me know what you think.
