On NativeScript Performance


Last week NativeScript made it into public beta, and in just a few days we got a tremendous amount of feedback. One question that came up over and over again was, “How do NativeScript apps perform?” In this post, I want to explain the details behind performance and share some great news about the upcoming release of NativeScript.

How it started

Like other new projects, NativeScript started from the idea of taking a fresh look at cross-platform mobile development with JavaScript. In the beginning, we had to determine whether the concept of NativeScript was even feasible. Should we translate JavaScript into Java? What about Objective-C back into JavaScript? During this exploratory phase, we learned that the answer was actually much simpler than that, thanks to the JavaScript bridge that exists on both iOS and Android. Well, thanks to Android fragmentation, this is only partially true. Let me explain…


Working on a project like NativeScript is anything but easy. There are many challenges imposed by working with two very different runtimes such as Dalvik and V8. Add the restricted environment on Android and you will get the idea. Controlling object lifetime with two garbage collectors, efficient type marshalling, the lack of 64-bit integers in JavaScript, correctly working with different UTF-8 encodings, and overloaded method resolution are just a few examples. All of these are nontrivial problems.

Statically Generated Bindings

One specific problem is extending/subclassing Java types from JavaScript. It is astonishing how a simple task like working with a UI widget becomes a challenging technical problem. Look no further than the Button documentation and its seemingly innocent example.

button.setOnClickListener(new View.OnClickListener() {
    public void onClick(View v) {
        // Perform action on click
    }
});
While the Java compiler is there for you to generate an anonymous class that implements the View.OnClickListener interface, there is no such facility in JavaScript. We solved this problem by generating proxy classes (bindings). Basically, we generated *.java source files, compiled them to *.class files, which in turn were compiled to *.dex files. You can find these *.dex files in the assets/bindings folder of every NativeScript for Android project. The total size of these files is more than 12MB, which is quite a lot.

Here begins the interesting part. Android 5 comes with a new runtime (ART). One of the major changes in ART is the ahead-of-time (AOT) compiler. Now you can imagine what happens when the AOT compiler has to compile more than 12MB of *.dex files on the very first run of any NativeScript for Android application. That’s right, it takes a long time. The problem is less apparent on Android 4.x, but it is still there.

Dynamically Generated Bindings

The solution is obvious. We simply need to generate bindings at runtime instead of at compile time. The immediate advantage is that we generate bindings only for the classes we actually extend in JavaScript. The fewer the bindings, the less work for the AOT compiler.

We started working on the new binding generator right after the first private beta. We were almost done in time for the public beta. However, almost doesn’t count. We decided to play it safe and release the first beta with statically generated bindings. The good news is that the new binding generator is already merged into the master branch (only two days after the public beta announcement).

Today I ran some basic performance tests on the following devices:

  • Device1 – Nexus 5, Android 4.4.1, build KOT49E
  • Device2 – Nexus 6, Android 5.0.1, build LRX22C

For the tests I used the built-in timing info that the Android OS provides. You have probably seen similar information in your logcat console.

I/ActivityManager(770): START u0 {act=android.intent.action.MAIN cat=[android.intent.category.LAUNCHER] flg=0x10200000 cmp=com.tns/.NativeScriptActivity} from pid 1030
I/ActivityManager(770): Displayed com.tns/.NativeScriptActivity: +3s614ms

Here are the results:

  • For Device1, the first start-up time was reduced from an average of 60.761 seconds to an average of 3.1419 seconds
  • For Device2, the first start-up time was reduced from an average of 39.384 seconds to an average of 3.541 seconds

Subsequent start-up times for both devices are ~2.5 seconds or less.

What’s next

There is a lot of room for performance improvement. Currently, NativeScript for Android uses JavaScript proxy objects to get a callback when a Java field is accessed or a Java method is invoked. The problem is that proxy objects (interceptors) are not fast. We plan to replace them with plain JavaScript objects that have a properly constructed prototype chain with accessors instead of interceptors. Another benefit of using prototype chains with accessors is that we will support the JavaScript instanceof operator.

Another area for improvement is memory management. Currently, we generate a lot of temporary Java objects, which may trigger the Java GC unnecessarily often. Moving some parts of the runtime from Java to C++ is a viable option that we are going to explore.


In closing, I would like to say that we are astounded by how popular NativeScript has become in such a short amount of time. We have learned so much in building the NativeScript runtime, and our experience in that process helps us improve NativeScript every single day. We’re looking forward to version 1. Building truly native mobile applications with native performance using JavaScript is the future, and the future is now.

4 thoughts on “On NativeScript Performance”

    1. Not yet, but this is on my task list. I’ll try to include NativeScript vs Xamarin benchmarks in my next blog post.

    1. A direct comparison between Xamarin and NativeScript is not very helpful. In a way, it is like a comparison of apples to oranges. Straight to your question: Yes, we compared Xamarin and NativeScript. We did this very carefully, taking into consideration each platform’s specifics.

       Xamarin rocks when it comes to code execution inside the Mono runtime. For example, operations on System.IO.File are much faster than NativeScript calling, say, java.io.File. In these kinds of scenarios there is no crossing of the Mono/Dalvik boundary. Your code runs 100% inside the Mono runtime, which is very fast (in fact, Mono can be faster than Dalvik/ART, see https://blog.xamarin.com/android-in-c-sharp/). Also, don’t forget that the Mono runtime provides much of the Dalvik functionality, so there is no need for Mono to call into Dalvik. Hence there is no marshalling.

       This is not the case with NativeScript. NativeScript does not itself provide, say, file operations. It relies on the Android/iOS platform. So there is always data marshalling when NativeScript has to cross the JavaScript/native boundary. We cannot beat that. A well-designed Xamarin app can execute more than 90% of the time inside the Mono runtime. However, when it comes to cross-boundary scenarios, NativeScript is slightly faster than Xamarin. Both NativeScript and Xamarin must pay for the data marshalling, so both platforms are equally fast (or slow). In computationally intensive scenarios, V8 and JavaScriptCore are as fast as the Mono runtime.

       Back to your question, which one is faster: Xamarin or NativeScript? It depends. For example, in NativeScript, once the UI is built there is not much (if any) marshalling. You can scroll, say, a ListView at 60FPS. In fact, it is as fast as the native platform can be. As I said in this post, you can think of NativeScript as a way to command the native platform through JavaScript instead of via Java or Objective-C.
