Software Performance Engineering

Every time we build software we have functional requirements. Maybe these functional requirements are not well defined, but they are good enough to start prototyping, and you refine them as you go. After all, the functional requirements describe what you have to build. Sometimes you have additional requirements that describe how your software system should operate, rather than how it should behave. We call them non-functional requirements. For example, if you build a website, a non-functional requirement can define the maximum page response time. In this post I am not going to write about the formal SPE approach, but rather about software performance engineering as a general discipline.


Anytime a new technology emerges, people want to know how fast it is. We seem obsessed with this question. Once we understand the basic scenarios where a new technology is applicable, we start asking questions about its performance. Unfortunately, talking about performance is not easy because of all the misunderstanding around it.

Not quick. Efficient.

Let’s set up some context. With any technology we are trying to solve a particular problem, not everything. In some sense, any technology is bound to its time. In general, nowadays we solve different problems than we did 10 years ago. This is especially obvious in the IT industry because it changes so fast. Thus, when we talk about a given technology it is important to understand where it comes from and what problems it tries to solve. Obviously, a technology cannot solve all current problems, and there is no best technology for everything and everybody.

We should understand another important aspect as well. Usually when a new technology emerges it keeps some compatibility with an older one. We don’t want people to relearn how to do trivial things. Often this can have a big impact on performance.

Having set up the context, it should be clear that interpreting performance results is also time-bound. What we considered fast 5 years ago may not be fast any longer. Now, let’s try to describe informally what performance is and how we try to build performant software. Performance is a general term describing various system aspects such as operation completion time, resource utilization, throughput, etc. While it is a quantitative discipline, it does not itself define any criterion for what good or bad performance is. For the sake of this post, this definition is good enough to understand why performance is important in, say, algorithmic trading. In general, there is a clear connection between performance and cost.

IT Industry-Education Gap

Performance issues can rarely be captured in a single value (e.g. CPU utilization) and thus they are difficult to understand. These problems become even more difficult in distributed and parallel systems. Despite the fact that performance is an important and difficult problem, most universities and other educational institutions fail to prepare their students to avoid and efficiently solve performance issues. The IT industry has recognized this fact, and companies like Pluralsight, Udacity and Coursera offer additional training on this topic.

In the rare cases where students are taught how to localize and solve performance issues, they use outdated textbooks from the ’80s. Current education cannot produce well-trained candidates, mostly because the teachers have outdated knowledge. On the other hand, many (online) education companies offer highly specialized performance courses in, say, web development, C++, Java or .NET, which cannot help students understand performance issues in depth.

Sometimes academia tries to help the IT industry by providing facilities like cache-oblivious algorithms or queueing network (QN) models, but abstracting away the real hardware often produces suboptimal solutions.

Engaging students in real-life projects can prepare them much better, whether it is an open-source project or a collaboration with the industry. At present, students just don’t get the chance to work on a big project and thus miss the opportunity to learn. Not surprisingly, the best resources on solving performance issues are various blogs and case studies from big companies like Google, Microsoft, Intel and Twitter.

Performance Engineering

Often software engineers have to rewrite code or change the system architecture because of performance problems. To mitigate such expensive changes, many software engineers employ various tools and practices. Usually these practices can be formalized as an iterative process which is part of the development process itself. A common simplified overview of such an iterative process might be as follows:

  • identify critical use cases
  • select a use case (by priority)
  • set performance goals
  • build/adjust performance model
  • implement the model
  • gather performance data (exercise the system)
  • report results

Different performance frameworks/approaches emphasize different stages and/or modelling techniques. However, what they all have in common is that the process is iterative. We set performance goals and perform quantitative measurements, and we repeat the process until we meet the goals.
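The measure-and-compare step of the loop above can be sketched in a few lines of JavaScript. This is a minimal illustration, not a real framework; the goal, the workload and the measurement method below are invented placeholders.

```javascript
// Performance goal for the selected use case: mean completion time.
const goalMs = 50;

// Stand-in for the prioritized use case we want to exercise.
function selectedUseCase() {
  let sum = 0;
  for (let i = 0; i < 1e6; i++) sum += i;
  return sum;
}

// Gather performance data: mean completion time over several runs.
function measureMeanMs(fn, runs) {
  const start = Date.now();
  for (let i = 0; i < runs; i++) fn();
  return (Date.now() - start) / runs;
}

const meanMs = measureMeanMs(selectedUseCase, 10);
// Report results; if the goal is not met, adjust the model and repeat.
const goalMet = meanMs <= goalMs;
```

A real setup would run this in CI against a stable testbed, since wall-clock numbers on a developer machine are noisy.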

Most performance engineering practices rely heavily on tools and automation. Usually they are part of various CI and testbed builds. This definitely streamlines the process and helps the software engineers. Still, there is a big caveat. Building a good performance model is not an easy task. You must have a very good understanding of the hardware and the whole system setup. The usual way to overcome this problem is to provide small, composable model templates for common tasks, so the developer can build larger and more complex models.

In closing, I would say that there isn’t a silver bullet when it comes to solving performance issues. The main reason solving performance issues is difficult is that it requires a lot of knowledge and expertise in both software and hardware. There is a lot of room for improvement in both education and industry.

Object Oriented Programming: An Evolutionary Approach

This post is not about the book Object Oriented Programming: An Evolutionary Approach by Brad Cox. I decided to use the book’s title because the author nailed the connection between software and evolution. It is a good book, by the way. I recommend it.

Last week a coworker sent me a link to the React.js Conf 2015 keynote video in which React Native was introduced. Because I work on NativeScript, I was curious to see how Facebook solves problems similar to ours. So I finally got some time and watched the video. The presentation is short, and probably the most important slide is the following one.


But this blog post is not about React Native. The thing that triggered me to write is something that the speaker said (the transcription is mine).

This is a component. This, we feel, is the proper separation of concerns for applications.

I couldn’t agree more. Components have been around for many years (don’t get me wrong, I am not bringing up one of those everything new is well-forgotten old themes). Yet components and component-based development are not as widely accepted as I think they should be. It is probably because so many people/companies saw value in software components and started defining/building them in whatever way they thought was right. This process brought about all the confusion over what a component really is.

Besides the fact that the term component is quite overloaded, it is important to note that many have tried to (re)define it at different times and in different environments/contexts. Nevertheless, some properties of software components were defined in exactly the same manner during the ’70s, ’80s, ’90s and later. Let’s see what these properties are.

  • binary standard/compatibility
  • separation of interface and implementation
  • language agnostic

These are some of the fundamental properties of any software component. More recent component definitions include properties like:

  • versioning
  • transaction support
  • security
  • etc.
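The second fundamental property above, separation of interface and implementation, can be sketched in a few lines of JavaScript. All names here are invented for illustration; the point is only that clients depend on a contract, so implementations are swappable.

```javascript
// The "interface": just a contract describing the required shape.
const LoggerContract = ["log"];

// One implementation of the contract.
function makeConsoleLogger() {
  return { log: (msg) => console.log("[console] " + msg) };
}

// A drop-in replacement; clients cannot tell the difference.
function makeSilentLogger() {
  return { log: () => {} };
}

// A client checks only the shape, never the concrete implementation.
function satisfies(component, contract) {
  return contract.every((m) => typeof component[m] === "function");
}

const consoleOk = satisfies(makeConsoleLogger(), LoggerContract);
const silentOk = satisfies(makeSilentLogger(), LoggerContract);
```

Real component systems (COM, Objective-C protocols, etc.) formalize this contract far more rigorously, including the binary standard mentioned above.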

But this blog post is not about software components either. It is about software evolution. We can define software evolution as variation in software over time. This definition is not complete, but it is good enough. It can be applied at different levels, whether to the software industry as a whole or to a small application. It is important to say that software evolution is a result of our understanding of software, including exact knowledge, culture and beliefs, at any point in time. We can reuse terms like mutation, crossover, hybrid and so on by analogy to describe processes in software evolution.

Combining different ideas is one of the primary factors in software evolution. And this is where we need software components. There is a common comparison between software components and LEGO blocks. The analogy of software genes might be another alternative: an application’s DNA is defined by its genes.

Software evolution is not a linear process. Do you remember Twitter’s dance between client-side and server-side rendering? It is a great example of the survival of the fittest principle. So, what will be the next thing in software evolution? I don’t think anybody knows the answer. So far, software components seem to be a practical way to go. Seeing big companies like Facebook emphasize composability is a good sign.

The best way to predict your future is to create it.
– Abraham Lincoln

The Quiet Horror of instanceof Operator

During the last months I was busier than ever with NativeScript. While my work keeps me busy with embedding the V8 JavaScript engine, I rarely get the chance to write JavaScript. Recently I had to deal with mapping Java OOP inheritance into JavaScript; more specifically, I had to fix a failing JavaScript unit test which uses the instanceof operator. So I grabbed the opportunity to dig into instanceof internals.

It is virtually impossible to talk about the instanceof operator without mentioning the typeof operator first. According to the MDN documentation:

The typeof operator returns a string indicating the type of the unevaluated operand.

As described, the typeof operator does not seem very useful. Probably the most interesting thing is the use of the word unevaluated. It allows us to test whether a particular symbol is defined. For example

if (typeof x !== 'undefined') {
    // safe to use x here
}

will execute without a ReferenceError even when x is not present.
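A few concrete results make this behavior clearer. The identifier below is deliberately never declared; typeof still evaluates without throwing.

```javascript
// typeof never throws, even for identifiers that were never declared.
const t1 = typeof totallyUndeclaredName;  // "undefined" -- no ReferenceError
const t2 = typeof 42;                     // "number"
const t3 = typeof "text";                 // "string"
const t4 = typeof {};                     // "object"
const t5 = typeof null;                   // "object" -- a well-known quirk
```

The `typeof null === "object"` result is a historical accident that the language has kept for compatibility.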

Let’s see the instanceof documentation:

The instanceof operator tests whether an object has in its prototype chain the prototype property of a constructor.

After digging into the instanceof operator I was even more puzzled. While the typeof operator has been around since the first edition of ECMAScript, it seems the language designer(s) didn’t have a clear idea about the instanceof operator. It is mentioned as a reserved keyword in the second edition of ECMAScript and is finally introduced in the third edition. The operator’s definition is clear, but I have trouble finding meaningful uses for it. Let’s see the following common example.

if (x instanceof Foo) {
    x.bar();
}

I feel uneasy with the assumption that if x has Foo’s prototype somewhere in its prototype chain, then it is safe to assume that bar exists. Mixing properties of a nominal type system into JavaScript just doesn’t seem intuitive to me. I guess there are some practical scenarios where the typeof and instanceof operators are useful, but my guess is that their number is limited.
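To make the prototype-chain walk concrete, here is a small sketch. Foo and Bar are made-up names; the last line shows the check that instanceof effectively performs.

```javascript
// instanceof walks the prototype chain of the left operand, looking for
// the "prototype" object of the right operand (the constructor).
function Foo() {}
function Bar() {}
Bar.prototype = Object.create(Foo.prototype);  // Bar "inherits" from Foo

const x = new Bar();

const isBar = x instanceof Bar;                    // true
const isFoo = x instanceof Foo;                    // true, via the chain
const sameCheck = Foo.prototype.isPrototypeOf(x);  // the equivalent question
const isArr = x instanceof Array;                  // false
```

Note that reassigning `Bar.prototype` after `x` is constructed would change these answers, which is exactly why instanceof gives only weak guarantees about an object’s actual shape.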

On Agile Practices

I recently read the article What Agile Teams Think of Agile Principles by Laurie Williams, and it got me thinking. The study’s conclusion is as follows:

The authors of the Agile Manifesto and the original 12 principles spelled out the essence of the agile trend that has transformed the software industry over more than a dozen years. That is, they nailed it.

Here are the top 10 agile practices from the case study.

Agile practice                                                 Mean   Std. dev.
Continuous integration                                         4.5    0.8
Short iterations (30 days or less)                             4.5    0.8
“Done” criteria                                                4.5    0.8
Automated tests run with each build                            4.4    0.9
Automated unit testing                                         4.4    0.9
Iteration reviews/demos                                        4.3    0.8
“Potentially shippable” features at the end of each iteration  4.3    0.9
“Whole” multidisciplinary team with one goal                   4.3    0.8
Synchronous communication                                      4.4    0.9
Embracing changing requirements                                4.3    0.8

These are indeed practices rather than exact science, and I am going to elaborate on this topic. But first I would like to recap a few things from the history of the software industry.

Making successful software is hard. Many software projects failed in the past and many are failing now. There are a lot of studies that confirm this; some claim that more than 50% of all software projects fail. In order to improve the rate of successful projects, we tried to adopt know-how from other industries. The software industry adopted metaphors like building software and software engineering. We started to apply waterfall methodologies and rigorous notations for defining software requirements, like UML. We tried many things in order to build better software, but not much changed.

Then people came up with the idea of the agile software methodology. The Agile Manifesto states:

  • Individuals and interactions over processes and tools
  • Working software over comprehensive documentation
  • Customer collaboration over contract negotiation
  • Responding to change over following a plan

The agile methodology proposes a different mindset. We started to put more emphasis on things like creativity and self-organizing teams than on engineering. Nowadays we use metaphors like writing software much more often than 15 years ago. Some people go further by comparing programmers to writers, concluding that in order to build good software we need good writers instead of good engineers. While I find such claims a bit controversial, there are many people who share similar opinions. In general, today we talk about software craftsmanship instead of software engineering.

These two approaches are not mutually exclusive. I see good trends of merging the two whenever it is reasonable. It is natural for people to select the best from both worlds. Still, the agile methodology is considered young. Most software companies still publish their job offerings as “Software Engineer Wanted” instead of “Software Craftsman Wanted”. This is only one example of what we have inherited in the IT industry. No matter how much an IT company boasts about how agile it is, the fact is that we need time to fully adopt the new mindset. The good thing is that the new mindset focuses on the individual, and I think this is the key to better software.

Software Razzie Awards

This is not a new idea. Every now and then someone suggests it, and apparently I hear about it more often than, say, 5 years ago. Sure, today software is more widespread than 5 years ago, but somehow I don’t think this is the only reason.

I guess the main reason for people’s dissatisfaction is that nowadays people have higher expectations of product quality. Take mobile apps, for example. Everyone expects mobile apps to work fast and smoothly and to provide a good user experience. These expectations are transferred to PCs as well. What was acceptable 5 years ago is not anymore.

Also, I don’t think software has become worse over time. Sure, there are products affected by software bloat; some notorious examples are Nero Burning ROM and iTunes. Although software bloat manifests in APIs (e.g. the Win32 API and the Linux kernel API) and frameworks (e.g. the JDK and .NET), there is a tendency for most developers to try to minimize and control their code. As a result, a new wave of lightweight software (e.g. Google Chrome, Node.js, nginx, etc.) has become popular.

So, what if there were Software Razzie Awards? There are the Pwnie Awards, but they are very security focused. I still cannot decide whether such awards would stimulate the IT industry or not. I guess they wouldn’t harm anyone.

NUI or GUI or … not

Today there are a lot of devices (smartphones, tablets, handheld game consoles, etc.) with support for gestural interface. We often call such interfaces natural user interfaces (NUI). These interfaces are in contrast with the traditional graphical user interfaces (GUI). In this post I am going to share my thoughts and experience with NUI.

Every software engineer knows that one should not make definitive assumptions about how IT systems will be used. Often IT systems are used in unexpected and unpredictable ways. However, this lesson is rarely applied when it comes to user interfaces.

Natural user interfaces strive to offer more intuitive and easy human-technology interaction. This all sounds good and nice, but let’s focus on the word natural. In my understanding, natural means that the user doesn’t have to use artificial input/interaction devices such as a keyboard and mouse. However, this does not mean that a NUI is easy and intuitive for everyone. Users still have to learn it. While a NUI is crucial for the fast adoption of a new product, that doesn’t mean it is easy to achieve.

Today’s NUIs are not natural. Every company provides its own NUI standard with its own, inherently artificial gestural language. Trying to forcefully impose a NUI standard is not a solution. Cultural aspects should be considered and preserved. For example, almost no smartphone has support for left-handed people.

My experience with various Android, iOS and Windows (Phone) 8 devices shows me that they are all inconsistent with one another. Often different applications on a particular platform use different gestures for the same command. Sometimes the companies use different NUI vocabularies for the same gesture. This can be confusing for users.

In closing, I think that natural interfaces should allow us to interact with devices the way we interact with objects in everyday life. The devices should be able to learn the user’s natural gestures/language and adapt to them. The companies should offer their current “natural” interfaces only as a fallback option.


Thoughts on C# Compiler

In 2010 I wrote the blog post Fun with pointers in C#. Back then, I thought it was fun. Today, I am not so sure. Let’s take a look at the following code fragment:

using System;

namespace ClassLibrary1
{
    public class Class1
    {
        unsafe public void Method1(ref object* obj)
        {
        }
    }
}

If you try to compile this code with the C# compiler distributed with .NET 1.1, you will get the following error:

error CS1005: Indirection to managed type is not valid

This is all good and nice because error CS1005 is aligned with the C# language specification which states:

Unlike references (values of reference types), pointers are not tracked by the garbage collector—the garbage collector has no knowledge of pointers and the data to which they point. For this reason a pointer is not permitted to point to a reference or to a struct that contains references, and the referent type of a pointer must be an unmanaged-type.

An unmanaged-type is any type that isn’t a reference-type or constructed type, and doesn’t contain reference-type or constructed type fields at any level of nesting. In other words, an unmanaged-type is one of the following:

  • sbyte, byte, short, ushort, int, uint, long, ulong, char, float, double, decimal, or bool.
  • Any enum-type.
  • Any pointer-type.
  • Any user-defined struct-type that is not a constructed type and contains fields of unmanaged-types only.

However, since .NET 2.0, Microsoft’s compiler has contained a bug that allows you to compile the code fragment above. Currently I use C# compiler version 4.0.30319.17929 and I can still compile the code. If you run the peverify.exe tool on the produced assembly, you will get the following error:

[IL]: Error: [ClassLibrary1.dll : ClassLibrary1.Class1::Method1][offset 0x00000001]
Unmanaged pointers are not a verifiable type.
1 Error(s) Verifying ClassLibrary1.dll

Mono C# compiler (version 3.0.6) does it right. It fails, as expected, with error CS0208:

Class1.cs(7,40): error CS0208: Cannot take the address of, 
get the size of, or declare a pointer to a managed type `object'

Considering that .NET 2.0 was released in 2005, it is hard to believe that Microsoft has not fixed the bug in the last 7 years. The only explanation I have is that the bug is quite esoteric.

Technical Debt

There are a lot of articles explaining what technical debt is, so why another one? A lot of smart people have written about it (see the references at the end of the post). Despite this, technical debt seems to be a hot topic over and over again, so here I put my two cents in.

I like Eric Allman’s article Managing Technical Debt. My favorite quote from it is:

Technical debt is inevitable.

Some may find this statement a bit controversial. I have had my own successes and failures in trying to manage technical debt. My experience shows that one can avoid technical debt only in small and simple projects. However, nowadays we often work on large and complex projects. As a result, we start our projects with an insufficient understanding of them, which naturally leads to acquiring technical debt.

Let’s define what technical debt is. Usually technical debt is defined as the bad coding practices and “dirty hacks” that patch the software product instead of building it properly. Most of the time technical debt is attributed to lazy and/or inexperienced software developers. Often the reasons for acquiring technical debt are project specific, but the most common are project cost, short deadlines, lack of experienced software engineers and so on.

A lot of managers and software developers are afraid of taking on technical debt. I don’t think technical debt is a scary thing as long as it is well managed. Well-managed technical debt can save time and money. Today customers buy features; they are usually not interested in maintaining the source code. Shipping the right set of features on time can be a huge win for everyone.

It is all about the risks and about managing the technical debt. Unmanaged technical debt can be devastating. It tends to accumulate until you cannot pay it back. Every effort to maintain and/or extend the source code becomes harder. Eventually it slows down the project, and in the worst case the project is cancelled.

Sometimes acquiring technical debt cannot be observed or predicted. One can take on technical debt intentionally or unintentionally. Unintentional technical debt can be dangerous if it remains unnoticed for a long time. Intentional technical debt can also be dangerous if the risks taken are high. Martin Fowler provides a practical decision-making approach for when to take on technical debt (see the references).

Common practices that help for better technical debt management are:

  • experienced software developers on the team
  • reasonable ship date/deadline
  • short release cycles
  • automate as many simple-but-tedious tasks as possible
  • spread knowledge across the team/remove one-person-bottlenecks

In closing, I think well-managed technical debt is a good thing. Like every debt, it allows you to do important things right now and pay the cost later. The consequences of badly managed technical debt are yet another reason to improve our skills in technical debt management.


  1. Ward Cunningham, The WyCash Portfolio Management System
  2. Steve McConnell, Technical Debt
  3. Martin Fowler, Technical Debt Quadrant
  4. Eric Allman, Managing Technical Debt
  5. Wikipedia, Technical Debt


It’s all about the version

Have you ever noticed that people are obsessed with versioning things? Yep, it’s true. Today everything has a version. Even you. I am not sure why versioning is so important. I guess it is related to the notion of improving things.

Here is a short list of what I encounter most often in my job.

  • Web (1.0), Web 2.0, Web 3.0
  • Project Management (PM 1.0), PM 2.0, PM 3.0
  • ALM (1.0), ALM 2.0, ALM 3.0
  • Agile (1.0), Agile 2.0
  • Scrum (1.0), Scrum 2.0
  • UML 1.x, UML 2.x

I can easily add a few dozen more items to the list. The things I selected can be classified as ideas, concepts, processes and standards. I won’t bother showing you software product versioning. However, you may be surprised to see that versioning is applied to other things as well.

  • Science 2.0
  • Health 2.0
  • Business 2.0
  • Enterprise 2.0

Versioning is applied even to food.

  • Milk 2.0
  • Bread 2.0

It’s not just that we version the things around us. We version ourselves as well.

  • Übermensch
  • Human 2.0
  • People 2.0

I am still playing with the idea of what would be the next version of me. Or you 😉