Force.com – JavaScript – Technology

Looking for an Application Developer

03/28/2014

At Arrowpointe we are growing and are looking to add another developer to the team. This is a great opportunity to join a small, growing, and nimble company in a hot space. Here is the official job posting (http://www.arrowpointe.com/careers/developer) and feel free to check it out, but I want to talk about some of the other aspects of the role and the type of person we are looking for.

We are looking for someone who is a self-starter, competitive, and always learning. You should have a little bit of that entrepreneurial spirit, as we are still a small and growing company. Someone who can identify ways to make a product better and then run with those ideas. If you want a job where someone gives you a list of specific developer tasks to complete, this is not the job for you. You’ll be mocking up designs, building them out, getting feedback, and continually iterating to make the product better. And yes, you may actually need to assist with support calls once in a while. It’s really not that bad, and more often than not it provides incredible insight into how the features you build are actually used. Often in ways you never intended or imagined.

Still here? Cool. The technical side of the work should also provide fun and challenging problems to work on. Our stack is composed mostly of salesforce.com, Heroku, Google Maps, jQuery, Angular, node, and Express.

Here are a few things we are currently working on, or have recently worked on, to give you a sense of the type of work you’ll be doing.

  • Built an HTML5 mobile mapping application for Salesforce1, still lots to do here
  • Built a node-webkit app to assist in demoing HTML5 mobile apps, this is used in the video above
  • Migrating parts of our app originally built with jQuery to Angular.js
  • Google is deprecating their map markers API so we had to build our own API with node and Express.js to generate dynamic colored map pins
  • Speaking of APIs, we’ll be doing more in this space as we hit the limits of the force.com platform and have to offload work to other services, mainly an Express.js based API running on Heroku
  • Looking into more GIS and shape/boundary analysis

Some other benefits:

  • We are bootstrapped (and profitable!) so we can do whatever we want without having to worry about investors or shareholders ;-)
  • If you live in Seattle we can work together here, http://impacthubseattle.com (but not every day because working from home is cool too)
  • Doing things like this is perfectly acceptable

Interested? Feel free to reach out to me on Twitter @TehNrd or use the contact form here.


Host multiple node.js apps on the same subdomain with Heroku?

03/17/2014

At Arrowpointe we are currently looking into building some node.js backed APIs on Heroku, but it didn’t take long to discover one specific limitation of the Heroku platform: there is no easy way to host multiple applications on the same domain, or more specifically the same subdomain.

Now why on earth would you want multiple Heroku apps on the same domain? There are lots of reasons: consistent branding of your API endpoint, handling all authentication in one place, or implementing some sort of front-line caching logic, to name a few. Heck, even just making http://api.mysite.com redirect to your developer site may be useful. As the API grows this approach also makes it easier to build a modular API. It’s not uncommon for large web APIs to consist of many smaller apps all accessed through the same endpoint. Maybe /launchRockets is I/O bound and node.js is the perfect solution, whereas /buildRockets is more CPU bound and maybe node.js is not the right language for it to run optimally. Breaking the API apart into smaller pieces allows you to make more flexible design and architecture decisions. Also, if someone messes up and commits a system-breaking change to /launchRockets, it won’t take down /buildRockets. (Please excuse me for not making these URLs more “RESTful”, such as /rocket/launch or /rocket/build; these URLs are just examples, relax.)

Cool, so let’s say we want to create a front door for our API at https://api.mysite.com . This is the number one requirement, based on some of the reasons listed above. It will be how users access many different pieces of API functionality, with different URLs like so:

https://api.mysite.com/buildRockets
https://api.mysite.com/launchRockets

From a code base perspective, /buildRockets and /launchRockets are totally separate and totally different. This type of design is very difficult to accomplish with Heroku, but we do have a few options. It is also worth noting that how to split up and break apart your API is a very grey area. Maybe this should all be in the same app, maybe not. It really depends on the API, what it does, who will be using it, and more.

The Options

1) Use different subdomains, with a separate Heroku app for each API. Something like:

https://buildRocket.mysite.com/design
https://launchRocket.mysite.com/ignite

This works but doesn’t address the requirement of being hosted on the same subdomain. It actually isn’t a terrible way of solving the problem and may end up being the best option. It is also the approach Heroku recommended when I submitted this question to their support team.

2) Combine the code from both APIs/apps and host it in one Heroku app. This is the grey area mentioned above and really needs to be decided on a case-by-case basis.

3) Have 3 different Heroku apps. The first acts as a proxy (http://mysite-api-proxy.herokuapp.com) that looks at the incoming request and forwards it to http://buildRocket.herokuapp.com or http://launchRocket.herokuapp.com using a module like bouncy or http-proxy.
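To make option #3 concrete, here is a sketch of the routing decision such a proxy would have to make. The path-to-app table and the herokuapp.com hostnames below are hypothetical examples, not real endpoints:

```javascript
// Hypothetical routing table: first path segment -> backend app.
var routes = {
  buildRockets:  'http://buildrocket.herokuapp.com',
  launchRockets: 'http://launchrocket.herokuapp.com'
};

// Return the backend target for an incoming URL, or null if unknown.
function resolveTarget(url) {
  var segment = url.split('/')[1] || '';
  segment = segment.split('?')[0]; // strip any query string
  return routes[segment] || null;
}
```

Inside the proxy’s request handler you would then call something like proxy.web(req, res, { target: resolveTarget(req.url) }), after answering 404 for the null case.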

Option #3 is interesting, specifically because these are node.js apps. In a framework using a synchronous architecture for handling requests and responses, option #3 wouldn’t work or scale well: the capacity of the proxy would need to be 1:1 with the two other apps, essentially doubling your costs. A node.js proxy should be much more performant, as it does not block while waiting for the actual API apps to respond. Even so, I have concerns with this approach with regard to the ability to scale and monitor both the API apps and the proxy app. For example, if the API starts to slow down, is that because the proxy app is overwhelmed or because the API app simply needs more dynos? There is no doubt this approach adds more complexity to monitoring.

I scoured the internet and Stack Overflow looking for answers to this problem and there just aren’t a lot of good answers… or answers at all. Most relate to synchronous architectures where #3 doesn’t even make sense, and the default answer is to use nginx on an EC2 instance. Infrastructure, servers, operating systems, load balancers… what is this strange talk ;) . The whole point of Heroku is to avoid this type of infrastructure management. Heroku suggested option #1, and then also using nginx, and my Stack Overflow question never got much of a response: http://stackoverflow.com/questions/22395653/how-to-host-multiple-node-js-apps-on-the-same-subdomain-with-heroku . It seemed the best way to answer this question was to build it and see what happens, so that is exactly what I did.

Below are two node.js apps: one with a simple ‘hello world’ response to simulate the API app, and another that proxies requests to it. Here is the API app, http://api-target.herokuapp.com :

var http = require('http');
var cluster = require('cluster');
var numCPUs = require('os').cpus().length;
 
if (cluster.isMaster) {
 
  // Fork workers.
  for (var i = 0; i < numCPUs; i++) {
    cluster.fork();
  }
 
  cluster.on('exit', function(worker, code, signal) {
    console.log('worker ' + worker.process.pid + ' died');
    cluster.fork();
  });
 
} else {
 
  http.createServer(function (req, res) {
    res.writeHead(200, {'Content-Type': 'text/plain'});
    res.end('Hi from target app.');
  }).listen(process.env.PORT || 5000);
 
}

And here is the proxy app, http://api-http-proxy.herokuapp.com :

var http = require('http'); 
var httpProxy = require('http-proxy');
var cluster = require('cluster');
var numCPUs = require('os').cpus().length;
 
if (cluster.isMaster) {
 
  // Fork workers.
  for (var i = 0; i < numCPUs; i++) {
    cluster.fork();
  }
 
  cluster.on('exit', function(worker, code, signal) {
    console.log('worker ' + worker.process.pid + ' died');
    cluster.fork();
  });
 
} else {
 
  var proxy = httpProxy.createProxyServer({});
 
  http.createServer(function(req, res) {
    req.headers.host = 'api-target.herokuapp.com';
    proxy.web(req, res, { target: 'http://api-target.herokuapp.com' });
  }).listen(process.env.PORT || 5000);
 
}

If you view the web pages you’ll see they both return the same response, ‘Hi from target app.’, which is expected. Both are running on a single 1X Heroku dyno.

Latency

The first thing I needed to determine was how much latency the proxy app adds to the entire request cycle. I could try to test the latency between the Heroku apps, but what I really care about is end-user latency, so I simply ran 50 pings to each URL from my computer and looked at the results:
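For reference, the min/avg/max/stddev line ping prints is just basic descriptive statistics over the round-trip samples, and you can reproduce it yourself (ping uses the population standard deviation):

```javascript
// Summarize raw round-trip times (in ms) the way ping's footer does.
function summarize(samples) {
  var n = samples.length;
  var min = Math.min.apply(null, samples);
  var max = Math.max.apply(null, samples);
  var avg = samples.reduce(function (a, b) { return a + b; }, 0) / n;
  // Population variance: mean of squared deviations from the average.
  var variance = samples.reduce(function (a, b) {
    return a + Math.pow(b - avg, 2);
  }, 0) / n;
  return { min: min, avg: avg, max: max, stddev: Math.sqrt(variance) };
}
```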

Direct to API: min/avg/max/stddev = 34.305/39.436/57.644/4.115 ms

With Proxy: min/avg/max/stddev = 35.414/38.870/56.269/3.480 ms

Going through the proxy was actually 1ms faster on average, so we can safely say the proxy app is not adding any meaningful latency overhead. This isn’t unexpected, considering both apps are running in the same datacenter.

Throughput and Response Time

The next step is to determine the maximum usable capacity of the API app when hit directly, without the proxy. Using Apache Bench we will first set a performance baseline for the API app. The goal here is not to max out req/sec but rather to determine the average time per request under a very light load, so we have a baseline response time.

ab -k -n 30 -c 1  http://api-target.herokuapp.com/
 
Requests per second:    8.18 [#/sec] (mean)
Time per request:       122.193 [ms] (mean)
Time per request:       122.193 [ms] (mean, across all concurrent requests)

Now let’s increase concurrent connections until the request time hits somewhere around 200ms. I was eventually able to confidently land at around 160 concurrent connections for this simple hello world API.

ab -k -n 20000 -c 160 http://api-target.herokuapp.com/
 
Requests per second:    823.59 [#/sec] (mean)
Time per request:       194.272 [ms] (mean)
Time per request:       1.214 [ms] (mean, across all concurrent requests)

We max out somewhere around 820 req/sec maintaining a response time under 200ms. Not too shabby. Now let’s run the same test against our node.js proxy app and see if it can maintain 820 req/sec with 160 concurrent connections and a response time near or under 200ms.
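As a sanity check, these numbers hang together via Little’s law: sustained throughput is roughly concurrency divided by average latency, so 160 concurrent connections at ~194ms each works out to ~823 req/sec, which matches what ab reports. A one-liner to verify:

```javascript
// Little's law: throughput equals concurrency divided by the
// average time each request is in flight.
function throughputPerSec(concurrency, avgLatencyMs) {
  return concurrency / (avgLatencyMs / 1000);
}
```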

ab -k -n 20000 -c 160 http://api-http-proxy.herokuapp.com/
 
Requests per second:    256.72 [#/sec] (mean)
Time per request:       623.246 [ms] (mean)
Time per request:       3.895 [ms] (mean, across all concurrent requests)

Not surprisingly, the proxy app is not able to match the results of hitting the API app directly, as the average response time shot up to ~600ms. I lowered the number of concurrent connections and was able to settle in at ~40 while still maintaining a response time under 200ms.

ab -k -n 20000 -c 40 http://api-http-proxy.herokuapp.com/
 
Requests per second:    195.59 [#/sec] (mean)
Time per request:       204.510 [ms] (mean)
Time per request:       5.113 [ms] (mean, across all concurrent requests)

Without the proxy we could get ~823 req/sec, but with the proxy this drops to ~195 req/sec. This is a significant drop and more than I expected. Yet a proxy app doing 195 req/sec on a single 1X dyno is still probably plenty fast for a lot of use cases. More than likely your API app is going to be doing something more complex than ‘hello world’, and as long as its capacity is lower than 195 req/sec the proxy app will not get in your way.

Let’s see what happens when we scale the proxy app to 4 dynos. If these dynos scale horizontally in a perfectly linear fashion, this should be enough to maintain the original 160 concurrent connections.

ab -k -n 20000 -c 160 http://api-http-proxy.herokuapp.com/
Requests per second:    622.65 [#/sec] (mean)
Time per request:       256.966 [ms] (mean)
Time per request:       1.606 [ms] (mean, across all concurrent requests)

Almost, but not quite. What about using one 2X dyno with the original 160 concurrent connections?

ab -k -n 20000 -c 160 http://api-http-proxy.herokuapp.com/
Requests per second:    490.64 [#/sec] (mean)
Time per request:       326.105 [ms] (mean)
Time per request:       2.038 [ms] (mean, across all concurrent requests)

A big improvement! Oddly enough, when I tried scaling up the 2X dynos the performance increase was marginal. And finally, what about running one of the big daddy PX dynos?

ab -k -n 20000 -c 160 http://api-http-proxy.herokuapp.com/
Requests per second:    606.67 [#/sec] (mean)
Time per request:       263.734 [ms] (mean)
Time per request:       1.648 [ms] (mean, across all concurrent requests)

Much closer to the original performance, but the cost/performance ratio here is terrible considering PX dynos run $500+ a month.

Takeaways

  • node-http-proxy can do ~195 req/sec with reasonable response time on a single 1X dyno
  • You will have to play with dyno quantity and type to find optimal performance
  • The proxy app is a single point of failure. If this goes down all other apps behind this proxy will not be accessible. You better be running at least 2 dynos on this proxy app.

Notice the title of this post ended with a question mark. Yes, you can do this. Is it the correct way to architect your API and Heroku apps? Who knows? At the end of the day I don’t think there is any “correct” answer. You have to decide whether having one API endpoint is worth the potential performance hit and the complexity of managing multiple apps. If you need blazing performance, don’t use the proxy. If you want a single entry point for all of your API calls, it looks like node-http-proxy will be able to scale and meet most use cases.

P.S. Want to help solve problems like this? Arrowpointe is hiring and looking for a killer dev. JavaScript skills required. Reach out to me if you are interested.

Force.com Latency Tester – Because Speed Matters

06/04/2013

Speed is important for web apps. This has been proven time and again by Amazon and Google. Even if your web app isn’t a portal for direct revenue like e-commerce or advertising, speed is still important. Even if the functionality an app delivers is awesome, a slow app will reflect poorly on it and build resentment among its users.

So, on to Force.com. In my role at Arrowpointe, the premier provider of mapping and geo-analytic solutions for the salesforce.com platform (oh, such a shameless plug), we have been looking at larger architectural problems. The force.com platform is great, and I’m a huge advocate, but not every problem can be 100% solved on the platform. Many times you need to utilize outside services in the form of public APIs, or even home-built solutions running on Heroku or AWS, to fill the gaps force.com cannot meet. Once we reach out to external services we start to lose control of speed. External web services introduce latency to our web apps that may seriously degrade the user experience. To help measure this I’ve created a rudimentary app I’m calling the Force.com Latency Tester. Read More

Introducing sObject-Remote for Visualforce JavaScript Remoting

03/11/2013

I’ve been doing a lot more client side dev lately, and a big piece of this is working with JavaScript Remoting. While doing this work I found that basic CRUD operations with JavaScript Remoting were not as simple as they could be. I didn’t want to have to wire up insert, query, update, and delete functions for each type of salesforce.com object I was working with. I wanted a lightweight and fast JavaScript library that was completely dynamic and allows you to build an entire app client side utilizing basic CRUD operations.

The result of this need is a small JavaScript library I created called sObject-Remote.

sObject-Remote is up on GitHub now and I encourage you to take a look and kick the tires. There are plenty of examples in the readme to give you an idea of how the library can be used. Here are a couple of basic examples:

Create and Insert a record:

var acct = new sObject('Account',{Name: 'test', Industry: 'Aerospace'});
 
acct.insert(function(result,event){
    console.log(result[0].Id);
});

Query Records:

sObject.query('select Id, Name, Owner.Name from Account where Industry = \'Aerospace\' limit 5',function(sObjects,event){
    //Loop through the records returned by the query
    for(var i = 0; i < sObjects.length; i++){
        console.log(sObjects[i]);
    }
});

Some of the highlights:

- Super simple sObject management for JavaScript
- Currently supports 4 basic CRUD operations (more DML operations to come)
- Supports DML options
- Works with any JavaScript framework
- No 3rd party JavaScript library dependencies

A good portion of this development was about me becoming more familiar with JavaScript, so I am definitely open to community feedback on how to make it better. Please report issues and let me know what you think.

JavaScript Remoting and Formatting Dates

03/05/2013

First off, I’m back! You may have noticed my blog posts and community involvement over the past 6 months have been almost nonexistent. In my previous job I had slowly started to move away from app development and lacked the daily interaction with code that inspired so many of my previous posts. If you haven’t heard, I left my previous job of six years to join Arrowpointe, and together with Scott Hemmeter we plan to do great things in the geolocation space. In my new position as Director of Application Development I am once again more involved with building apps and solving technical problems. So now… let’s talk about JavaScript Remoting and formatting dates on the client side.

If you don’t already know, JavaScript Remoting is awesome. It allows you to directly call Apex methods from JavaScript, returns the response as JSON, and is super snappy and responsive. You can read more about it here. What is not awesome about JavaScript Remoting is how it handles Date and DateTime fields. In actuality it handles these fields fine, but the platform itself leaves something to be desired. Let’s take a look at the following JavaScript that returns the CreatedDate field from an Account.

sObject.query('select Id, Name, CreatedDate from Account limit 1',function(result,event){
    console.log(result[0].CreatedDate);
    var d = new Date(result[0].CreatedDate);
    console.log(d);
});

The output will be the following.

1362028097000 //The number of milliseconds since January 1st, 1970
Wed Feb 27 2013 21:08:17 GMT-0800 (PST)

(Oh…and if you are wondering what that sObject.query() magic is, stay tuned….that is the subject of an upcoming blog post.)

Hmm… neither of those is too pretty, and being the astute global developers we are, we want to make sure the date/time is formatted to the user’s locale. Something like “2/27/2013 9:08 PM” in America and “27/02/2013 21:08” in the UK. Let’s walk through the options I went through trying to figure this out.

1) JavaScript Date.toLocaleString()
The Date object in JavaScript does have a toLocaleString() method, but don’t even think about using it. First, it’s inconsistent and doesn’t always work like you would expect. Second, when salesforce.com returns the time portion of a date it is already adjusted to the user’s selected time zone, which is nice, but the toLocaleString() method shifts it again, making the time wrong.

2) Use a Library like jQuery Localize or Date.js
Yes, this will be perfect! False. I thought I could use the salesforce.com locale values in tandem with these libraries, but the values salesforce.com uses for locales are in a non-standard format. It’s actually not salesforce.com’s fault, as I think (hope) they created their whole localization engine and rules before these standards were in place. The two options here are to have users enter and store another locale value that is standards compliant, or, for each of the existing salesforce.com locale values (100+), to create a little JavaScript file with the formatting rules. Neither is ideal.

3) Go to stackexchange.com and ask for help
Here someone pointed me to a JavaScript object called UserContext (you can view it by entering UserContext in the browser console while logged in to salesforce.com). It provides all sorts of great info about the user, but specifically it includes the Date and DateTime masks for date formatting: M/d/yyyy and M/d/yyyy h:mm a. Progress! Yet after looking for a library that could take this format and parse it correctly I once again came up dry. After looking at the UserContext object in more detail and digging through some other salesforce.com JavaScript files, I found the solution.
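If you did want to consume those two masks yourself, a tiny hand-rolled formatter covering just these patterns is not hard. This is a sketch, not a general date-mask parser, and it uses the UTC accessors here only so the example is deterministic; in a real page you would use the local getters:

```javascript
// Format a Date against the two masks UserContext exposes:
// 'M/d/yyyy' and 'M/d/yyyy h:mm a'. Sketch only.
function formatWithMask(date, mask) {
  var h24 = date.getUTCHours();
  var tokens = {
    'M': date.getUTCMonth() + 1,               // 1-based month, no padding
    'd': date.getUTCDate(),
    'yyyy': date.getUTCFullYear(),
    'h': h24 % 12 || 12,                       // 12-hour clock
    'mm': ('0' + date.getUTCMinutes()).slice(-2),
    'a': h24 < 12 ? 'AM' : 'PM'
  };
  // Longer tokens first so 'yyyy' and 'mm' win over single letters.
  return mask.replace(/yyyy|mm|M|d|h|a/g, function (t) {
    return tokens[t];
  });
}
```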

Buried within the salesforce.com JavaScript files is an object called DateUtil with two very useful methods: getDateTimeStringFromUserLocale(date) and getDateStringFromUserLocale(date). These do exactly what their names suggest. You pass in a normal JavaScript Date and they return a nicely formatted string based on the user’s selected locale.

So we are good now, right? Not exactly. At the very top of the JavaScript file in which these methods exist is this nice little message from salesforce.com:

/*
 * This code is for Internal Salesforce use only, and subject to change without notice.
 * Customers shouldn't reference this file in any web pages.
 */

Well, that’s no fun! But rules were meant to be broken, right… right? Ya! We are hardcore rule-breaking developers! In fact, I’ve got some NWA and Wu-Tang Clan playing right now… at the same time! Let’s be safe about our rule breaking though. Seriously, this could break at any time, so proceed at your own risk.

First we need some default date/time formatting in case our call to the DateUtil method encounters any issues. Let’s revisit the original example with some slight modifications. Make note of the comments in the code.

sObject.query('select Id, Name, CreatedDate from Account limit 1',function(result,event){
    //Convert CreatedDate to a JavaScript date object
    var d = new Date(result[0].CreatedDate);
 
    //Remove first 4 characters (day), the seconds and the GMT offset
    var dateString = d.toString();
    dateString = dateString.substring(4,dateString.lastIndexOf(':'));
 
    //Attempt to format datetime using sfdc DateUtil method
    try{ 
        dateString = DateUtil.getDateTimeStringFromUserLocale(d);
    }catch(err){
        //Fail silently or alert devs DateUtil method is no longer working
    }
 
    //Output the formatted date string
    console.log(dateString);
});

The result will be the following output:

2/27/2013 9:08 PM //If the DateUtil method succeeded
 
Feb 27 2013 21:08 //DateUtil failed and we used the default formatting

Beautiful!

There is one last issue that may creep up. If your Visualforce page has the showHeader attribute set to false, these salesforce.com JavaScript files will not be loaded into the page. The easy fix is to include a Visualforce inputField component bound to a Date or DateTime field and hide it. This loads the date formatting JavaScript files. Like so:

<div style="display:none;">
    <apex:inputField value="{!opp.CloseDate}"/>
</div>

Hope this was helpful and stay tuned for more client side JavaScript related blog posts.
