Vladimir Prus


Tuesday, May 31, 2016

Desktops and Startups

I spent a good part of 2015 working on a desktop app for a startup, not a typical combination these days. If you're building a mobile app, there are multiple companies offering platforms for every task you need, and a lot of recommendations on which platform to choose. On desktop, you have to build or pick everything yourself. In this post, I'll share some advice.

UI framework

What does one use to create cross-platform desktop applications these days? Let's first look at the standard options on each platform.

On Windows, there are two modern technologies:

  • XAML, .NET and C#. A good stack if you like C#. In our case, we have a shell extension too, which per Microsoft guidelines should not be written in managed code, but doing that one part in C++ is not very hard.

  • XAML, Windows Runtime and C++/CX. This is a newer technology, apparently coming from the part of Microsoft that prefers C++. The C++/CX language is native, with no CLR runtime involved, and is pretty close to standard C++, and is therefore attractive if you're mostly a C++ engineer. On the downside, you're limited to Windows 8.1 and later. Also, the application will be a Metro-style one—while there were announcements that desktop apps will be possible soon, that has not happened yet. A shell extension or tray icon will likely be a big problem.

On OSX, there’s only one native stack, although you have the choice between Objective C (not a language I want to write core logic in, ever) and Swift (not a language I want to write core logic in, yet).

Two technologies attempt to be fully cross-platform:

  • Qt and C++ (using either Qt Widgets or Qt Quick for UI). Works, in theory, on all Windows and OSX versions in existence, with Qt Widgets extremely stable and Qt Quick maturing.

  • Xamarin and C# (using Xamarin.Forms and XAML). Also a nice stack, and compiles to native code, but requires Windows 8.1 or later.

Given the above, what do you do if you definitely want to reuse the core logic layer? Since we wanted to support Windows versions prior to 8.1, that rules out C#, and we actually already had this layer written in C++ with Qt. And while putting a XAML UI on Windows and an Objective-C UI on OSX on top of a C++ core would be possible, it would add a fair amount of incidental complexity. Therefore, I went with the Qt option, and since the UI was not going to be extremely complicated, decided to give Qt Quick a try.

Of course, this was a tradeoff between development speed and native look. I originally produced the Windows version, and creating the first OSX version then took about a day—clearly a good pace for a startup. On top of that, any interface changes were immediately available on both platforms, just a rebuild away. The UI did not look exactly native on OSX, but close enough for most people, and on Windows, there's already enough style variety that nobody had any concerns.

On the negative side, it turned out that Qt Quick is not as mature as I had hoped. I have covered this in detail in the previous post.

Conclusion: Qt with Qt Quick is fine if you have little UI and are already using C++. Xamarin might be a viable option as well.

Installation and Updates

We wanted the installation and update to be as transparent as possible, with no questions asked. On OSX, using DMG for initial installation is the standard solution, and autoupdate can be handled by a library called Sparkle.

On Windows, Microsoft documents two options. Windows Installer (MSI) is the standard technology, but it does not support autoupdate, and any custom solution has to display prompt dialogs. ClickOnce is a newer technology for .NET applications, supporting installation into a per-user location without touching any system directory, and transparent updates. Sadly, it does not appear to support native C++ applications, only C++ applications built for the CLR. I ended up writing a custom installer from scratch.

The installer downloads an update manifest (in the Sparkle format), then downloads an archive for the most recent version, checks its signature, unpacks it to a subdirectory of the user's AppData folder, and runs the binary from there. Auto-update works much the same—detect that the update manifest has changed, and start the installer again. When the new version of the application starts, it uses IPC to ask any previous version to shut down.
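The update check itself boils down to comparing the installed version against the one in the manifest. Here is a minimal sketch of a numeric, component-wise comparison (the function names are made up for illustration, not taken from our actual installer):

```cpp
#include <sstream>
#include <string>
#include <vector>

// Split a dotted version string like "1.2.10" into numeric components,
// so that "1.10" correctly compares greater than "1.9".
std::vector<int> parseVersion(const std::string& s)
{
    std::vector<int> parts;
    std::istringstream in(s);
    std::string item;
    while (std::getline(in, item, '.'))
        parts.push_back(std::stoi(item));
    return parts;
}

// True if the manifest advertises a strictly newer version than installed.
bool updateAvailable(const std::string& installed, const std::string& manifest)
{
    // std::vector comparison is lexicographic over the components.
    return parseVersion(installed) < parseVersion(manifest);
}
```

The point of parsing into numbers first is that a plain string comparison would consider "1.9" newer than "1.10".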

This approach works surprisingly well—most installation attempts succeed, and most auto-update attempts do indeed update. It is interesting to note how diverse environments are around the world. For example, downloading 20 megabytes is almost instant for me, but it takes minutes, and sometimes several tries, for users in other locations. Adopting binary deltas, like Chrome does, sounds like a good idea.

Conclusion: on Windows, a custom installer is a viable option for a native binary that does not modify the system, and on OSX, Sparkle works out of the box.

Code Signing

I first ran into Windows binary signing at Mentor, when we were building our P2 installer, and at one point the IT-mandated anti-virus software started deleting our own binaries. It would only stop when we got a code signing certificate. This time, a couple of years later, we had as many as three obstacles. First, Chrome would flag our installer as risky on download. Second, Windows SmartScreen would refuse to run it. Finally, the antivirus software would randomly wake up and break things. Naturally, we had to get a code signing certificate too.

The code signing certificate involves organization validation. Not much of a problem for a large company whose address can be validated via Google Maps. But a small startup with no landline, no office number on the door, and no utility bills is deemed suspect by the registrars. It took a while for our CEO to build the appropriate chain of trust.

I would say the whole system is quite pointless. Surely, organization validation makes it possible to pass an address and a phone number to the police, in case good guys become bad overnight. However, it does absolutely nothing to protect against well-meaning but incompetent guys — for example, those who leak signing keys, build on virus-infected machines or do not secure auto-update. And it does nothing to protect against really bad guys, who surely can fake addresses and phone numbers.

Conclusion: if you plan to publish Windows applications, get a code signing certificate as soon as possible.

Crash Reporting

Crash reporting has been part of Windows for a while, but by default, only Microsoft gets to look at the crashes. One can apply for a developer account with permission to look at your own crashes, but the instructions start with 'get an extended validation certificate from one of these three CAs'. Extended validation is even messier than organization validation, and Microsoft could well not approve us anyway, so I looked at other options.

Chrome includes a library called Breakpad, and while it has no server side, Mozilla fills the void with Socorro. The setup could be easier—in particular, there are no official Ubuntu packages, you must use a specific deployment system, and it requires the ELK stack. However, after a few hours I got the first test crash reported, and was going to start using it for real when I came across DrDump.

That service provides a library to handle crashes, including UI, a cloud backend to store crashes, and a web app to review them. On a crash, a minidump is sent automatically and the user is prompted to submit a full dump, unless one was previously collected for the same stacktrace. Overall it took about an hour to integrate and play with, and it has worked quite satisfactorily since then. It was surprising, when asking about pricing, to hear that the service is free—reportedly because with matching crash stacktraces, the disk storage required for full dumps is quite low. I doubt it would still be free for an app with a million users each hitting unique bugs, but it remains free for us.

Conclusion: DrDump is a fine solution for an early stage startup. Breakpad and Socorro will also be fine when you have time for devops.

Analytics

The mechanics of collecting analytics are easy. We use Mixpanel for mobile analytics and wanted to use it for desktop apps as well. On mobile, there's an SDK that sends events, collects system properties and manages the event queue. There's no such option for desktop, but there is a REST API, which is sufficient. Adding basic system details like OS version is easy, and we did not bother implementing an event queue. Surely some events do get lost, but it is not a big practical problem.
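For reference, Mixpanel's HTTP tracking endpoint takes a base64-encoded JSON object in its data parameter. A sketch of building such a request URL follows; the token is a placeholder, and the JSON is assembled by hand only for brevity:

```cpp
#include <string>

// Minimal base64 encoder (standard alphabet, '=' padding).
std::string base64(const std::string& in)
{
    static const char* tbl =
        "ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789+/";
    std::string out;
    size_t i = 0;
    for (; i + 3 <= in.size(); i += 3) {
        unsigned v = (unsigned char)in[i] << 16 |
                     (unsigned char)in[i + 1] << 8 |
                     (unsigned char)in[i + 2];
        out += tbl[v >> 18]; out += tbl[(v >> 12) & 63];
        out += tbl[(v >> 6) & 63]; out += tbl[v & 63];
    }
    if (in.size() - i == 1) {
        unsigned v = (unsigned char)in[i] << 16;
        out += tbl[v >> 18]; out += tbl[(v >> 12) & 63]; out += "==";
    } else if (in.size() - i == 2) {
        unsigned v = (unsigned char)in[i] << 16 | (unsigned char)in[i + 1] << 8;
        out += tbl[v >> 18]; out += tbl[(v >> 12) & 63];
        out += tbl[(v >> 6) & 63]; out += '=';
    }
    return out;
}

// Build a /track request URL for a single event with a project token.
std::string trackUrl(const std::string& event, const std::string& token)
{
    std::string json = "{\"event\":\"" + event +
        "\",\"properties\":{\"token\":\"" + token + "\"}}";
    return "https://api.mixpanel.com/track/?data=" + base64(json);
}
```

The resulting URL can then be issued with whatever HTTP client the application already links against.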

With the data in, Mixpanel makes it very easy to show a chart of event counts, filtered and segmented as you wish. Whether it is useful is not clear.

  • No chart comes with any statistical analysis. If you want to check whether Windows 7 users really have a lower conversion rate, you need to do the math using other tools.

  • Box plot, the standard way to look at a relation between a continuous and a categorical variable, is nowhere to be seen.

  • You can only look at 90 days of data.

In essence, given that drawing basic charts is frictionless while statistical analysis demands external tools, Mixpanel lures you into simplistic analyses. In the end, I just exported everything and looked at the data in R.

Conclusion: you likely don't need any analytics service. It's easy to send data into a database, aggregate it, and analyze it using a decent statistical package.

Summing it up

While there's no comprehensive platform that helps to quickly launch a desktop application, there are enough pieces of technology that can be put together. The most important things I've learned are:

  • Pick a cross-platform UI framework; you probably don’t have time to write separate code for different operating systems. I found Qt to work OK, but you might want to also consider Xamarin.

  • Get a code signing certificate early.

  • Do your own analytics. Decide what will be most important when you launch, figure out what statistical tools you'd need, and create events to support that.

Thursday, May 19, 2016

Qt Quick on Desktop

I worked with Qt quite a bit over the years, but it was only in 2015 that I had a chance to do substantial work with Qt Quick. I wanted to share some impressions, specifically for desktop applications.


Here is an advance summary of the key points, starting with advantages:

  • Qt Quick is now a mature way to build desktop applications that either use standard-looking desktop controls, or have a relatively simple custom UI.
  • Data binding is quite pleasant to use. However, it's limited to binding UI properties to expressions over model properties. Dynamically changing the UI structure is possible, but rather convoluted.
  • The animation system is solid, and support for GL effects is much more convenient than writing GL directly.
  • As a pleasant surprise, it has a state machine built in, though with some quirks.

But not everything is perfect:

  • The set of standard controls and styles could be larger. If you wish to achieve Metro design or Material design on desktop, you might need to use third-party extensions or do it yourself.
  • The styling mechanisms (in Qt Quick Controls 1) are quite limited, having neither inheritance nor attributes, and Qt Quick Controls 2 does something else entirely.
  • There are two different layout mechanisms, each with its own quirks.
  • As of Qt 5.5, High DPI support involved doing the math yourself. This might have improved since.
  • OpenGL is required, and especially on Windows, the set of possible GL configurations is large, the documentation is imperfect, and there are "interesting" differences in behaviour.

What is Qt Quick?

Let's clarify some terminology first:

  • QML is a language that defines a tree of objects, along with property bindings and executable code. It uses a custom language for the tree proper, and JavaScript for expressions and functions
  • Qt Quick is a set of basic visual components, and an engine to render them
  • Qt Quick Controls is a set of standard UI controls and layouts.

These are pretty much always used together, so I'll use "Qt Quick" throughout regardless of what layer I really talk about.

QML and Qt Quick

To illustrate how a Qt Quick UI is put together, I'll use a simple part of a registration UI, shown below.

As you start typing a phone number, a decorative line under the input fields turns into a progress bar; as you submit the form, the progress bar becomes an indefinite one; and when an error is returned, the hint below the input fields becomes an error display.

The progress bar is a custom component that I won't discuss in detail, but once written, it can be easily used:

CustomInput {
    id: phoneNumber
}
CustomPercentageLine {
    percentage: model.validPhoneLength == -1 ? 1 : phoneNumber.text.length/model.validPhoneLength
    animated: model.working
}

Here, model is an instance of a C++ class that we've injected into QML. The binding expression checks whether the phone number has a fixed length, and if so, computes the progress bar percentage. And while the model is busy validating the phone number, its working property is true, so the animated property of the view is also updated, causing the progress bar to show an indefinite state. Similarly, this is how the hint and error display are implemented:

CustomLabel {
    text: model.error ? model.error : "We'll confirm your number by sending a one-time SMS"
}

Data binding is certainly good, and QML offers certain syntactic conveniences, as we can use JavaScript expressions with no escaping - unlike XML-based templating engines. This becomes particularly important for larger expressions, like the one below:

CustomLabel {
    text: {
        if (model.error) { return model.error; }
        if (model.state === "resendingPin") {
            return "Enter the previous code or wait for a new code to be sent";
        } else {
            return "Enter the code";
        }
    }
    color: model.error ? "#ce4844" : "#999ba4"
}

The animated property of my custom progress bar is used to indicate indefinite progress, which is shown by three rectangles moving across the blue progress line. Making the rectangles move is easy with the animation framework; for example, here's the animation definition for the first rectangle:

NumberAnimation {
    id: animation1
    target: rectangle1
    property: "x"
    duration: 2000
    easing.type: Easing.OutCubic
    loops: Animation.Infinite
    from: -6
    to: parent.width
}

The animation for the second and third rectangles is similar, but they should start with a delay, so we need to use a separate timer item:

Timer {
    id: timer2
    interval: 500
    onTriggered: animation2.start()
    repeat: false
}

As soon as the animated property of the progress bar is set to true, we start animating the first rectangle, as well as the timers that will start animating the two others.


It would be more convenient if animations supported delayed start without auxiliary timers, but apart from that, the mechanism is OK.
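One pattern that avoids a separate Timer item is to wrap the animation in a SequentialAnimation with a PauseAnimation in front. A sketch for the second rectangle, with ids mirroring the earlier snippets (so rectangle2 and animation2 are assumptions carried over from them):

```qml
// Sketch: delay the start of an animation without a Timer item.
// The PauseAnimation waits 500ms, then the NumberAnimation loops forever.
SequentialAnimation {
    id: animation2
    PauseAnimation { duration: 500 }
    NumberAnimation {
        target: rectangle2
        property: "x"
        duration: 2000
        easing.type: Easing.OutCubic
        loops: Animation.Infinite
        from: -6
        to: rectangle2.parent.width
    }
}
```

Whether this is actually cleaner than the Timer depends on taste, since the delay becomes part of the animation definition rather than the start logic.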

For managing larger UI changes, the state machine framework comes in handy. For example, upon startup we show a splash screen and attempt to connect to the backend server and check for authorization, showing the login screen only if necessary. If we can't connect quickly, we show a separate screen with a progress bar. Here's the relevant code:

DSM.State {
    id: initial
    DSM.TimeoutTransition {
        targetState: waiting
        timeout: 10000
    }
    DSM.SignalTransition {
        signal: model.connectedChanged
        guard: model.connected && !model.isAuthenticated()
        targetState: needPhone
    }
}

It illustrates two features. First, it's possible to transition to a different state simply after a timeout. Second, it's possible to transition to a different state on a signal, if a particular condition is met. However, the second example also shows one of the largest inconveniences: you can't easily transition when a particular expression becomes true. I'd much rather not bother thinking about which signals are emitted, and write this:

DSM.SignalTransition {
    guard: model.connected && !model.isAuthenticated()
    targetState: needPhone
}

After all, tracking changes to expressions already works for properties. There is a workaround to get this effect with auxiliary items, but it ought to be a standard feature. Still, having state machines as a standard feature greatly simplifies UI logic.
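For completeness, here is a sketch of the auxiliary-item workaround (readyWatcher is a made-up name): bind the expression to a property of a helper item, then transition on that property's change signal.

```qml
// Helper item whose 'ready' property tracks the expression of interest.
Item {
    id: readyWatcher
    property bool ready: model.connected && !model.isAuthenticated()
}

// The transition now only needs the property change signal.
DSM.SignalTransition {
    signal: readyWatcher.readyChanged
    guard: readyWatcher.ready
    targetState: needPhone
}
```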

Controls Styling

I'll start the critical part of my post with controls and styling. The screenshot below shows the controls gallery example with the two styles available to Windows desktop applications in Qt 5.5.

You might wonder what the difference between the Desktop and Base styles is. The Desktop style, which is the default, actually uses the Qt Widgets style engine to render everything, so it looks exactly the same as standard Qt Widgets. It's good if you want to take advantage of Qt Quick without changing the look, but creating custom styles in C++ certainly does not look very attractive.

One can easily switch to the Base style, implemented entirely in QML, but it is slightly less polished, and has quite a lot of hardcoded styling, such as:

Rectangle {
    radius: TextSingleton.implicitHeight * 0.16
    border.color: control.activeFocus ? "#47b" : "#999"
}

The literal 0.16 is repeated in 8 places over several files in the Base style definitions, while #47b is found in 14 locations - not what I'd call solid engineering, compared to, say, most CSS frameworks or the Android style system, where you can make consistent changes with a few attribute declarations. There are various third-party solutions that offer modern styles for Qt, including Ubuntu Components and Papyros, but these are not actually styles -- you need to use their own buttons and inputs and other components; you can't just restyle your existing code.

The set of controls is quite basic, too. For example, there's not even an implementation of a filtered list view.

Layouts

Qt Quick offers two approaches to layout. Anchor-based layout allows you to position an item relative to another item - for example, you can fill the parent and add a margin, or you can put an item to the right of another item. It's easy, but not very adaptive.

There is also dynamic layout, where the desired sizes of items and the available geometry are used to place items on screen. That's what I used, and it mostly works, except for the annoying inability to set margins, as the margins feature in anchor layout is specific to anchor layout. I ended up writing a custom component that specifically adds a margin around content.
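Such a margin component could look roughly like this (a sketch under my own assumptions, not the actual code; the property names are made up). It places its children in an inner item inset by the margin:

```qml
// Sketch of a margin wrapper for use inside dynamic layouts.
Item {
    property int margin: 8
    default property alias content: inner.data
    implicitWidth: inner.childrenRect.width + 2 * margin
    implicitHeight: inner.childrenRect.height + 2 * margin
    Item {
        id: inner
        anchors.fill: parent
        anchors.margins: margin
    }
}
```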

Not invented here

Both the styling and layout issues above make me think that Qt Quick is being too original. It's a declarative UI language, so looking at CSS and trying to be more conceptually similar would have helped. Sure, CSS is not perfect, but a lot of people know how to use it to style everything from text to buttons to toolbars, and even create random geometric shapes. Common behaviours like borders, margins, and shadows are trivial to accomplish. Android also has declarative UI with what I find a better style system and layout mechanisms. XAML is also a popular solution. It would be great if Qt Quick were at least conceptually similar to one of these.

Also, given that QML is heavily based on JavaScript, one would imagine JavaScript modules would work. Maybe not the fancy async module systems, but just standard CommonJS modules. They don't, with no progress since 2011, as you can see in the issue.

Overall, it would be great if Qt Quick were more aligned with other technologies. I have no idea whether it was possible given actual development timelines.

Quality of implementation

There are a couple of issues that were important, but not fundamental.

As of Qt 5.5, we found that High DPI support is not quite good. Every time you specify a position or size, it's in pixels, so for a good look on a High DPI system, one needs a utility function to scale pixels according to the physical resolution. This might be fixed in Qt 5.6, but I haven't had a chance to try.
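A minimal version of such a utility function might look like this (a sketch; the 96 DPI baseline is the usual Windows convention, and the function name is made up):

```cpp
// Scale a design-time pixel value, authored for a 96 DPI screen,
// to the actual screen resolution, rounding to the nearest pixel.
int scaledPixels(int designPx, double screenDpi)
{
    return static_cast<int>(designPx * screenDpi / 96.0 + 0.5);
}
```

In QML, the same helper would typically be exposed as a context property or a singleton and wrapped around every literal size.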

Qt Quick uses OpenGL for all rendering. Of course, this creates lots of possibilities, but also configuration problems. Say, on Windows you might end up with 5 different GL implementations, not counting different video card vendors. We ended up with quite a few crashes from users that had 'GL' in the backtrace and could not be reproduced. We also had fun getting things to work under VirtualBox, at one point even reverting a particular Windows update for things to start working. This should eventually improve, but beware for now.

Wrapping Up

I found Qt Quick to be a pleasant framework for developing a desktop app with a relatively simple interface. There were some limitations and issues, but they probably will eventually be fixed.

The biggest concern is that the set of controls and styles is quite basic, and that the styling mechanism is not very flexible. Furthermore, the new direction is Qt Quick Controls 2, which is both focused on mobile devices and uses a different styling approach, so it is unlikely to bring improvements on desktop. If I were to use Qt Quick on a larger project, I would probably use a third-party controls library.

Qt remains the best framework for desktop applications using C++. Whether it's the best desktop framework overall, given that Xamarin was recently open-sourced, remains to be seen.

Friday, June 26, 2015

iPhone app clearance

I moved from an iPhone to an Android phone recently, and among the dozens of iOS apps I've tried over a couple of years, some deserve to be mentioned. I include links where appropriate for convenience, but have no affiliation with any of the developers.


DayOne is a very nice journal application. I've used it to record what I've done each day, and then review entries weekly and copy them into a different application. It served that purpose very well - the UI is clean, there's a reminder feature, and swipe navigation between entries. It also supports sync via Dropbox, so I could make entries on a phone and review them on a tablet.

DocScan is an app to take a shot of a document with the camera and convert it into a PDF, as if produced by a flatbed scanner. Its primary benefit is perspective correction that works very well. The files can obviously be sent by email, or added to cloud storage, though I found the number of required clicks a tad large. There's also some automatic conversion to black-and-white, but it never produced results I liked, so I always disabled it.


Pretty much every fitness app is about using the accelerometer and GPS to track steps and runs, which is quite boring, but I've come across two kinds of different apps.

Runtastic Push Ups is a push-up trainer and tracker. You put it on the floor, and it uses the proximity sensor to count your push-ups. Or, you can press the screen with your nose. That sounds like a bizarre idea, but I found it fun in practice. They also make two other apps - Pull Ups and Squats - that use the accelerometer to track something other than steps.

There are also apps that use the flash and built-in camera to measure heart rate. Cardiio does exactly that, while Azumio Stress Check determines your stress level using heart rate variability. The latter can be quite entertaining.


In this saturated app category, I've picked Deep Relax and Self. The former has about 40 different sounds you can mix, and set an arbitrary timer. The latter is beautifully minimalist, maybe too simple for me.


I've tried a crazy number of those, and only a few were still installed after 5 minutes. Those that survived were Step Journal, Charge, TnS, Lumen Trails and rTracker, but none of them ended up actively used. In particular, rTracker, which is widely praised, ended up being an ugly app that could not draw sensible charts.


This mini-review is concluded by the single game app I found installed, called Music Tiles, where one taps black tiles that scroll from the top, producing sounds, and trying to get as far as possible. It's surprisingly addictive, especially on long-haul flights.

Monday, June 01, 2015

Last Nine Years

Exactly nine years ago today, on June 1, 2006, I joined CodeSourcery as the first Eclipse engineer. I had zero Eclipse and Java experience, but knowing KDevelop and GDB was deemed sufficient. Looks like I did passably well, and for my part, I'm happy to have played a small part in a huge change to open-source embedded development tools.

Every current GDB tutorial for embedded development says to just load your binary to the target. It was my first big project, in 2006, to make that work, since GDB knew nothing about flash memory. I ended up teaching it about memory maps, translating memory writes into flash erase and programming operations, throwing together support for some ColdFire chip, and finally adding a single checkbox in the UI.

GDB non-stop mode was entirely done by CodeSourcery. In this mode, each thread can be independently stopped and examined while others are running. There, I contributed to asynchronous processing of commands and reworked the breakpoint machinery. We made GDB handle breakpoints in constructors and function templates, implemented tracepoints, and different flavours of OS awareness. I was also part of the initial prototyping for Python scripting.

On the Eclipse side, we made just as many changes, but had less luck submitting them upstream, so describing them is similar to a research paper - it tells you what's possible, but you're on your own if you want an implementation. Still, we made Eclipse scan for hardware debug devices automatically, modified the project wizard to include debug settings and create projects you can immediately debug, implemented an IDE editor for hardware board descriptions, and modified the register view to effectively deal with thousands of memory-mapped registers. Among that, I did manage to create and submit a new Eclipse CDT view - OS Resources - that shows tables of different objects on the debugged system.

Between Eclipse and GDB, there's a small interface called GDB/MI. It also saw significant changes, becoming less stateful, adding new notifications (so that Eclipse views don't have to explicitly pull the data on each stop), and improving variable access methods.

In November 2010, CodeSourcery was acquired by Mentor Graphics and our product went on to become Sourcery CodeBench, the decision based in part on the progress made by open-source tools in the previous years. Understandably, a lot of work after that went into integration with other products - including Mentor's hardware debug devices, profiling tools and Mentor Embedded Linux. Personally, I went on to lead the IDE team, learning how to run a fully distributed team across 12 time zones. We were less active in open source for a while, but gradually returned, and one of the biggest recent contributions is a product installer based on Eclipse P2 that we announced in 2014.

And then technology went full circle. The most recent open-source contributions from CodeSourcery team are patches for LLDB-MI, a bridge between LLDB and Eclipse.

In 2006, I’ve joined CodeSourcery in part because at the previous position, there was no longer anything to learn. Over years, I worked with the best people in each area: Daniel Jacobowitz and Pedro Alves on GDB,  Carlos O’Donnel on Glibc and GCC, Mikhail Khodjaiants on Eclipse and of course Mark Mitchell, CEO who wrote a C++ frontend once. It was a great experience. Now, it's time to learn something new. See you there!

Monday, March 30, 2015

Branding Eclipse Products

Last year we worked on a new Eclipse-based IDE, in particular creating product branding from scratch. Despite visual editors and several existing online tutorials, that still proved confusing, so I've decided to document what we've learned.

In this post, we'll review branding of a simple product. It has:
  • One functional plugin, and one functional feature including that one plugin
  • One product plugin and one product feature including the product plugin and functional feature
  • Product configuration
To make things simpler:
  • I use artwork we used in our products.
  • There's no localization of strings
  • The welcome/intro screen is almost neglected, since we use custom HTML for that, and it might be the best approach for any new product.
Of course, I assume you know what a plugin is and what a feature is.

Functional features

A functional feature, along with its contained plugins, implements some useful behavior that you can potentially install into any Eclipse-based product. For example, the EGit feature is equally useful for a Java and a C++ IDE. For proprietary products, it is often tempting to just mix everything together, but creating separate features is often beneficial. The first example is of exactly such a standalone feature; see the sources.

The important files in the plugin are META-INF/MANIFEST.MF, about.ini and about.html. For the feature, feature.xml and about.html are important. In particular, feature.xml relays most of the branding to a 'branding plugin', which, in our case, is the lonely functional plugin. 

Starting with that source, you can import it into your PDE, click on feature.xml, export the feature into a directory, install it into a second Eclipse installation, and then the "Help->About" dialog will look like this:

In the row of icons that represent feature providers, there is our "crystal ball" logo. The icon is defined by the branding plugin, via the featureImage attribute in about.ini. Should you have multiple features with the same icon (pixel-wise), they will be merged in this dialog. Clicking "Installation Details" gives us in-depth information:
This dialog shows the root P2 installable units in the Eclipse instance. The 'name' column is taken from the 'label' attribute of feature.xml, and the description below comes from the 'description' element in feature.xml. The same description is shown when installing the feature. We can also use the "Properties" button to see more details from feature.xml, like copyright and license:
The above is fairly reasonable. The features tab, however, brings some surprises:
The "Feature Name" column comes from the branding plugin, the Bundle-Name attribute of MANIFEST.MF. The description below is composed from the 'label' attribute in feature.xml and the 'aboutText' attribute in the plugin's about.ini. The icon also comes from about.ini - and if you specify the icon attribute in feature.xml, it is ignored. Finally, the "License" button opens the about.html file in the feature directory - which is generally different from the license attribute in feature.xml. Clicking the "Plug-in Details" button shows branding information for plugins, which is rather simple:
The "Provider" and "Plug-in Name" fields correspond to Bundle-Vendor and Bundle-Name in MANIFEST.MF. The "Legal Info" button opens about.html in the plugin's root directory. As an aside, I'm not sure why it's called "License" for features, "Legal Info" for plugins and "License Agreement" for installable units.
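To make the mapping concrete, the branding plugin's about.ini might look roughly like this (a sketch; the file names and strings are placeholders, not the actual example's values):

```ini
# about.ini in the branding plugin (values are placeholders)
# Icon shown in the provider row of the About dialog, and next to
# the feature on the Features tab.
featureImage=icons/feature32.png
# Text shown in the feature description area of the Features tab.
aboutText=Example functional feature.\n\
Copyright (c) 2016 Example Corp.
```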

Products

Products put together a set of functional features that make sense for a particular audience, and add the overall branding. Physically, a product consists of a product feature and a product plugin, organized the same way as a functional feature and plugin. The example source for that is here, which you can again import into PDE, export into a P2 repository, and install into a separate Eclipse instance, and then run it with the "-product com.codesourcery.seed.product.product" command-line option.

The key element of product branding is this extension in plugin.xml:
<product name="Example Eclipse Product" application="org.eclipse.ui.ide.workbench">
    <property name="appName" value="Example Eclipse Product"/>
    <property name="windowImages" value="images/csl16.png,images/csl32.png,images/csl48.png"/>
    <property name="aboutImage" value="images/IDE_about.png"/>
    <property name="aboutText" value="About text for the example product."/>
</product>
The first two properties define the outside appearance of the product - its name, shown in the window title, and its icon, shown in the taskbar, launcher, or window switcher, depending on your OS. The other two properties affect the about dialog box, making it look like below:
Now it actually looks like a custom product! The installation details in this case are almost the same, except that it has two features, one depending on the other:
There are several other properties in the product definition that are related to the welcome screen, but as I've said, we replace it completely, so I'm not going to describe them. The example source code has some definitions if you're interested.

Launcher and Product Build
The product feature we've built can be exported from PDE (or built with Maven, if you wish) and installed into Eclipse, but we usually want to build a complete product that can be immediately run. We need a product configuration (.product file) for that, and it's covered in detail elsewhere. As far as branding goes, we only need two details:
  • Showing custom splash screen on startup
  • Starting our product
The product configuration specifies them in a fairly direct way - the product is specified as attributes of the top-level 'product' element, and the splash screen becomes a command-line argument to the launcher. In the exported product directory, two files control this behaviour. First, the eclipse.ini file in the root directory contains '-showsplash com.codesourcery.seed.product' for the splash screen. Second, 'configuration/config.ini' specifies the product to run. That almost completes our product branding.
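Concretely, the two files might look like the sketch below, using the ids from the example above (note that eclipse.ini places each argument on its own line, and that config.ini names the product via the eclipse.product key):

```ini
; eclipse.ini (root of the exported product)
-showsplash
com.codesourcery.seed.product

; configuration/config.ini
eclipse.product=com.codesourcery.seed.product.product
```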

Almost, because while the product extension point can specify the window icon and similar properties, the .product file can also specify those. When you do a product export in PDE, the properties from the .product file are copied into the product extension point, so unless you duplicate them, you get a product with no window icons. This problem is accounted for in the final version. We don't have this problem in practice, since we build the final product from the command line, and so the splash screen and product id are the only branding we need in the '.product' file.

Do it yourself
I have put together a seed Eclipse product over at GitHub, and you are free to use it if you are creating a new product. I would suggest these tips:
  • Use high-resolution artwork, preferably created from vector originals, and keep those originals.
  • Having the license in every about.html and every feature.xml is awkward. Either automate it, or refer to documentation for the license terms.
  • Use the same label for each functional feature and its branding plugin.
  • If you can get HTML support working on your target systems, use custom HTML instead of default welcome screen.
Hope this helps!


Dmitry Kozlov has worked with me on this, while Sourcery Services allowed me to take time to summarize our experience.

Tuesday, January 27, 2015

Lean Analytics

Last year, I often needed to display and analyze timestamped events, such as product evaluations, issue tracker activity, or credit card expenses. After trying a few approaches, I ended up writing a JavaScript library called Lean Analytics. It's based on dc.js, crossfilter.js, and D3.js, and looks like this:

The easiest way to understand it is to just play with the demo or take a look at the demo source code. Below I'll explain what it is, when you'd want to use it, and when not.


The primary goal was simply to show, visually, the trends in already collected but rather dry data. The amount of data is fairly small, dimensions are few, and there's no need to extract hidden correlations between dozens of values, nor is there a need for dedicated analysts to tweak the charts on a full-time basis. Rather, I wanted it to be extra easy to chart new types of data, to avoid storing anything in the cloud, and to embed the charts in existing web apps.

The library itself is bundled into a single JavaScript file, plus you need to include 3 CSS files. You also need to write code to define where to get the data, what metrics to show, and how to group your entries - all of which is straightforward. For that, you get a lot of fine-tuned visuals:

  • Chart showing the main metric (such as transaction amount) aggregated per week, as well as a derived metric (such as a trendline). There are also dropdowns to select the desired metrics.
  • Compact linear charts showing distribution of the chosen metric over categories.
  • Tabular view of the data.
  • Filtering of the main chart by category values, in real time, in your browser. The filters are even stored as part of the URL, so you can share links easily.
  • Buttons to select time ranges.
  • Automatic progress and error reporting for loading data.
The charts are meant to replace a div in your host HTML document, and they use Bootstrap for styling, so they will probably work just fine inside your internal webapps.


DC.js is the foundation for Lean Analytics and, together with crossfilter, does all the hard work of filtering data in real time in your browser. It can be used to create far more interesting visualizations, but it requires a considerable amount of code to configure all the details - way more than I was comfortable with.

Several libraries implement charts on top of D3, such as NVD3 and C3. Sadly, those are not integrated with crossfilter, and are somewhat in a state of flux.

Google Charts is very solid as far as charting goes, but does not support any crossfiltering either.

Kibana is a full-blown dashboard solution on top of ElasticSearch. It's certainly great for serious data analysis, but it is neither trivial to set up nor embeddable in webapps.

Mixpanel is fairly nice, but it's a cloud service, and I did not want to, or could not, put the data in the cloud.

Zenobase, finally, is a very nice solution specific to life tracking, answering questions like "how is my blood pressure correlated with weight". It is inspiring in some ways, but it is also a cloud service, and too specialized for life tracking to be directly useful.


If you want to chart timestamped events with numeric values that are naturally aggregated over weeks, and you want to filter data by categories in real time, and the amount of data is not very large, give Lean Analytics a try.

Tuesday, December 23, 2014

Calendars and Timezones

It's boring to talk about calendars and time zones, but apparently major software companies still get this wrong, and a fair number of people end up very confused. For the latest example: in summer 2014, the time zone in Moscow, Russia was UTC+4, and it was to stay that way all year. Then the government decided to switch to UTC+3 in autumn anyway, and I woke up with my iPhone showing the wrong time.

iOS: hardcoded time zone

Since iOS ships the time zone database as part of the OS, it still thought Moscow was UTC+4, so the clock was one hour later. That was easy to fix: I changed the iPhone's time zone to a nearby UTC+3 one. And then the calendar events started to randomly misbehave, showing times one hour off, in different directions.

Google Calendar: works just fine

Suppose I create an event in Google Calendar, via the web, at 18:00 Moscow time (UTC+3). The invitation email to guests has a total of 4 MIME parts:
  • HTML part describing the event, with buttons to accept or decline
  • text part with a plain-text version of same
  • attachment with the application/ics content type, and invite.ics name
  • invisible part with text/calendar content type, and same content as invite.ics
Applications that understand invites will look at one of the last two parts, and see this:
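In iCalendar form, the relevant lines look roughly like the sketch below (the date is illustrative; the point is the trailing 'Z', which marks the time as UTC):

```
BEGIN:VEVENT
DTSTART:20141206T150000Z
DTEND:20141206T160000Z
END:VEVENT
```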
That is, the invite specifies the event time as 15:00 UTC, with no time zone information, which is the correct time.

iOS: using time zone name

When I access my Google Calendar from the iPhone, I see the same event at 19:00. Apparently, when accessing Google Calendar via the Exchange protocol, iOS receives the time zone of the event by name, as "Moscow", checks its outdated database, and decides the time is 18:00 UTC+4 - one hour later than it should be.

Exchange: totally confused

Like iOS, our corporate Exchange did not know about the recent changes and still thought Moscow was UTC+4, so I switched it to a time zone called "(UTC+03:00)‎‎ Kaliningrad, Minsk". When I create an event at 18:00, the invitation email has 2 MIME parts:
  • text part that briefly says "When: Saturday, December 06, 2014 19:00-20:00. (UTC+03:00) Minsk", which looks right
  • invisible part with text/calendar content type
That calendar data is just the opposite of what Google does, since it names the time zone:
DTSTART;TZID=Kaliningrad Standard Time:20141205T180000
DTEND;TZID=Kaliningrad Standard Time:20141205T190000
Not only that, but the name of the time zone differs from the one in the text part, and the time zone itself is also defined inside the content, as UTC+2, so that makes for 18:00 UTC+2 - one hour earlier than it should be. I don't have a good theory of how it can be that broken.
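The embedded definition looks roughly like the reconstruction below (the exact transition rules may differ; note TZOFFSETTO of +0200 rather than +0300):

```
BEGIN:VTIMEZONE
TZID:Kaliningrad Standard Time
BEGIN:STANDARD
DTSTART:16010101T000000
TZOFFSETFROM:+0200
TZOFFSETTO:+0200
END:STANDARD
END:VTIMEZONE
```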


What I ended up doing, and what you should do in a similar situation, is find a time zone with the right offset and DST rules, but one far away geographically and politically. I ended up using Madagascar on iOS and Nairobi in Exchange. Then, you should make sure that every single calendar system you use is switched to that time zone:
  • In Google Calendar, under Calendar Settings, modify Country and then Time Zone.
  • On iOS, modify Settings → General, Date & Time.
  • On iOS, also modify Settings → Mail, Contacts, Calendars → Time Zone Support.
  • In Exchange, or rather Outlook Web App, modify Settings → Regional → Current Time Zone. You can also go to Settings → Calendar and click "change your work week to the current time zone", though that does not matter much.
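If you want to double-check that a candidate zone really has the right offset and no DST, here is a quick sketch in Python (assuming the standard-library zoneinfo module and a reasonably current tz database):

```python
from datetime import datetime, timedelta
from zoneinfo import ZoneInfo

def utc_offsets(tz_name, year):
    """Return the UTC offsets in winter and summer - if they differ,
    the zone observes DST and is a poor stand-in."""
    tz = ZoneInfo(tz_name)
    return (datetime(year, 1, 15, 12, tzinfo=tz).utcoffset(),
            datetime(year, 7, 15, 12, tzinfo=tz).utcoffset())

# Nairobi has been UTC+3 with no DST for decades - a safe
# stand-in for post-2014 Moscow.
print(utc_offsets("Africa/Nairobi", 2015))
```

The same check applied to your current zone right after a government-mandated change would reveal whether your local tz database has caught up.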