A Vim errorformat for `firebase deploy`

In this post I share the lessons I learned from writing a compiler plugin for Vim that can be used when deploying to Firebase. I dissect the errorformat suitable for extracting errors from the output of `firebase deploy`.

Vim has a very useful feature called Quickfix. By way of a very cursory introduction, Quickfix is a specialized window that collects and parses the output from a command that you run. Every line in Quickfix acts as a hyperlink to a particular line and column of a given file. What makes this so useful is that you can use it with any conceivable command that potentially produces many "hits" and Quickfix lets you jump from one "hit" to the next and back at will.

A few examples:

  • grep - when you are looking for all occurrences of a particular string or regex over an entire codebase
  • Compilation - when you are building code and want a clean, uncluttered list of all the errors
  • Testing - when you run a test suite and want to see any test failures that occurred
  • Linting - similar to compilation, but this is such a frequent use case that I wanted to mention it on its own.

The Quickfix window gets populated by default when you run :vimgrep, :grep or :make. A lot of vim plugins tie into Quickfix, too. For example, the fugitive.vim plugin adds a :Ggrep command, which is a wrapper for git grep, so you can browse all matches from your current git repository right in the Quickfix window (though that is but a side benefit of all the capabilities this plugin offers).

The :make command is special, though, in that it is completely configurable. If you don’t customize it, :make will look for a Makefile in your working directory and run it just as you would expect. However, you can harness the power of :make by setting two variables: makeprg and errorformat.

The first variable, makeprg, simply tells :make what external program or script to execute, and of course you can configure your command arguments as dynamically as you want. The second variable, errorformat, is where all the parsing magic happens and is the topic of this post.

Out of the box, vim comes with support for a lot of different programming languages, compilers and frameworks, and if you start tapping into the plugin ecosystem you can build, compile, test or lint to your heart’s content and never even know about makeprg and errorformat. At least, that’s how it was for me over several years. Then, I finally hit a point where I got curious about how to bend the Quickfix window to my will.

I’ve been writing some code for Google Cloud Platform’s Firebase Functions, and for a while my development cycle looked like this:

  1. Edit my code
  2. firebase deploy
  3. Fix any errors
  4. If there were no errors during deployment, test my code in Firebase
  5. Repeat

I won’t get into how inefficient this cycle is, and what I’ve discovered to make it much tighter, because that would be a whole different story.
But for the sake of this story, suffice it to say that steps 1, 2 and 3 were happening in vim. Actually, neovim, so that step 2 would happen in a terminal window right inside nvim itself.
Firebase deploy is the crux of the problem: it takes about 15 seconds to finish if it finds an error, and about 1 minute if it succeeds.

So, in order to automate and make this whole cycle a little more asynchronous, I first used Tim Pope's vim-dispatch plugin, which introduces an asynchronous variant of :make, invoked with a capital M (:Make). Then, using an autocommand for the BufWritePost event and targeting just the file I was working on, I would start :Make! in the background every time I saved the file. Thus, I could keep working, and when firebase deploy was done it would tell me at the bottom of the screen whether it had finished successfully or with an error.
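The autocommand amounted to something along these lines (a sketch; the exact file pattern here is hypothetical):

```
" Redeploy in the background every time this file is saved.
" :Make! is vim-dispatch's asynchronous variant of :make.
autocmd BufWritePost index.js silent Make!
```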

This was a nice step in the right direction, but up to this point I left the errorformat variable completely untouched. Thus, even if an error occurred during deployment, the Quickfix window remained empty and useless. I would still have to look at the output of :Make to see what error firebase deploy had reported and where it was.

I did not really search for a firebase compiler plugin for vim on the net. I figured, the time had come for me to finally learn how errorformat works. I have always had a soft spot for regular expressions, but it turns out that the structure of errorformat goes way beyond simple regexes. So it was still a bit challenging to figure out.

Here is a sample run of firebase deploy. I intentionally introduced a syntax error in my code so I could show you how the error is reported:

> time firebase deploy
⚠  functions: package.json indicates an outdated version of firebase-functions.
Please upgrade using npm install --save firebase-functions@latest in your functions directory.

=== Deploying to 'myproj-b00de'...

i  deploying functions
i  functions: ensuring required API is enabled...
✔  functions: required API is enabled
i  functions: preparing functions directory for uploading...

Error: Error occurred while parsing your function triggers.

/home/me/code/assistant/myproj/functions/data.js:4
class (DataManager {
      ^

SyntaxError: Unexpected token '('
    at wrapSafe (internal/modules/cjs/loader.js:1067:16)
    at Module._compile (internal/modules/cjs/loader.js:1115:27)
    at Object.Module._extensions..js (internal/modules/cjs/loader.js:1171:10)
    at Module.load (internal/modules/cjs/loader.js:1000:32)
    at Function.Module._load (internal/modules/cjs/loader.js:899:14)
    at Module.require (internal/modules/cjs/loader.js:1040:19)
    at require (internal/modules/cjs/helpers.js:72:18)
    at Object.<anonymous> (/home/me/code/assistant/myproj/functions/functions/index.js:40:22)
    at Module._compile (internal/modules/cjs/loader.js:1151:30)
    at Object.Module._extensions..js (internal/modules/cjs/loader.js:1171:10)

Having trouble? Try firebase [command] --help

real    0m14.499s
user    0m13.276s
sys     0m1.740s

As you can see from the above output, the error is in line 4 of the file data.js. The column in which the error was spotted is marked with a caret in the next line. Then you can see the description of the error (Unexpected token ‘(‘) and finally a traceback which we don’t really care about.

All of that information gets distilled in my Quickfix window to the following single line:

data.js|4 col 7 error|  SyntaxError: Unexpected token '('

And by hitting enter with the cursor on the line above in the Quickfix window, vim takes me straight to line 4, column 7 of data.js, where the error is.

And here is my firebase compiler in all its glory:

 > ~/.config/nvim/after/compiler/firebase.vim:
 line numbers are not part of the file
 lines starting with " are comments,
 giving examples that would be matched by the pattern in the next line

 1| let current_compiler="firebase"
 2| CompilerSet makeprg=firebase\ deploy

 3| "/home/me/code/assistant/myproj/functions/data.js:4
 4| CompilerSet errorformat=%E%f:%l

 5| "      ^
 6| CompilerSet errorformat+=%-C%p^

 7| "SyntaxError: Unexpected token '('
 8| CompilerSet errorformat+=%+Z%[%^:\ ]%#:\ %m

 9| "class (DataManager {
10| CompilerSet errorformat+=%-C%.%#

11| "=== Deploying to 'myproj-b00de'...
12| CompilerSet errorformat+=%-G%.%#

Line 1 allows me to just say :compiler firebase, which is a shortcut to setting the two variables makeprg and errorformat in a single step.
Line 2 sets makeprg to the command firebase deploy. The space has to be escaped with a backslash.

Lines 4 onward all build the errorformat in several successive steps. Notice the first of them uses a plain equals sign, while the subsequent ones use +=, to append to the previous value. This is not necessary - we could just assign all the patterns to errorformat at once, separating them with commas. But that would be much harder to read:

CompilerSet errorformat=%E%f:%l,%-C%p^,%+Z%[%^:\ ]%#:\ %m,%-C%.%#,%-G%.%#

Apart from legibility, another benefit of building errorformat incrementally is that we can adjust the priority given to each pattern by moving them up or down, to control whether a pattern matches before or after another one. Plus, for debugging purposes, we can just comment out any patterns.

An errorformat is a comma-delimited list of patterns. Each line in the output of makeprg is matched against the patterns in errorformat, in order, until one matches. The patterns use a %-based token notation similar to the scanf format string in C. In the compiler file above, the comment above each pattern shows an example string that would be matched.


%E%f:%l

If this pattern matches, set the multi-line flag (%E): match a file path (%f), a colon, and a line number (%l).

%-C%p^

Only consider this pattern if the multi-line flag is set (%C), and if it matches, do not add the line to the error message (the - in the %-C token): if there is a caret preceded by white-space, '-' or '.' characters, count them and use that count as the column number (%p).

%+Z%[%^:\ ]%#:\ %m

Only consider this pattern if the multi-line flag is set, and if it matches, clear the flag (%Z): match the line if it starts with a run of characters other than colon or space, followed by a colon and a space, and save the remainder of the line as the error message (%m). The + in %+Z means the whole matched line, including the SyntaxError prefix, goes into the message.

%-C%.%#

Ignore any other line if the multi-line flag is set (%-C; %.%# is the errorformat spelling of the regex .*).

%-G%.%#

Ignore all the informational lines before and after the error (%-G).
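As a rough illustration (not how Vim matches internally), here is the same extraction over the three significant lines of the deploy output, expressed in Python regex terms:

```python
import re

# The lines of interest from the firebase deploy output, annotated with the
# errorformat pattern that handles each one.
block = [
    "/home/me/code/assistant/myproj/functions/data.js:4",  # %E%f:%l
    "class (DataManager {",                                # %-C%.%# (ignored)
    "      ^",                                             # %-C%p^ (column pointer)
    "",
    "SyntaxError: Unexpected token '('",                   # %+Z%[%^:\ ]%#:\ %m
]

header = re.match(r"(?P<file>.+):(?P<lnum>\d+)$", block[0])
col = block[2].index("^") + 1   # %p counts the characters before the caret
message = block[4]              # %+ keeps the whole line as the message

print(header["file"], header["lnum"], col, message)
```

Running this prints the same pieces of information that end up on the single Quickfix line shown earlier: the file, line 4, column 7, and the SyntaxError message.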

I found the following pages useful in exploring this topic:

Finally, I want to mention that I also found Tim Pope’s Projectionist plugin very useful to segregate these customizations to just the projects in which they were needed, instead of getting applied globally.

Retrieve Plone's Navigation Tree Using REST API

I recently released collective.restapi.navigationtree to the Python Package Index, an add-on extending Plone's REST API with an endpoint that returns the site's navigation tree down to a configurable depth.

Plone has a beautiful RESTful Hypermedia API, but its @navigation endpoint (see docs) is a bit too simplistic. As evidenced by the fact that the vast majority of websites out there have one form or another of dropdowns in their main navigation menus, going beyond the top level menu is almost non-negotiable. But @navigation does not offer this ability. What is one to do?

Well, after opening an issue on GitHub, I decided to create a separate add-on as a proof of concept:

collective.restapi.navigationtree - PyPI

Tests are included to make sure it runs on Plone 4.3.latest, 5.0 and 5.1, for both Archetypes and Dexterity.

By default, Plone does not provide dropdown navigation menus. But pretty much every Plone site I have ever worked on has webcouturier.dropdownmenu installed to fill this gap. So I borrowed some of its code to generate the JSON response, and introduced a new endpoint called @navigationtree.

Currently, collective.restapi.navigationtree depends on webcouturier.dropdownmenu (as well as plone.restapi, of course), but my assumption was that if you need the former, you probably already have the latter installed. I also lean on webcouturier.dropdownmenu's configuration, in particular its dropdown_depth parameter. So you will get the same depth of your navigation tree in the JSON response as the site's menu. However, I'm already rethinking this dependency. It would be much cleaner to just add a query parameter to the endpoint to specify the desired tree depth than to rely on an external add-on's configuration. At some point I will release a new version with the dependency on webcouturier.dropdownmenu stripped out.





Listing the Kernel Versions of All Your Hosts With Ansible

A very simple Ansible playbook that allows you to dump the distribution and kernel version of all the hosts in your inventory to a local file.

If you want to quickly find the exact kernel versions of a large number of hosts, Ansible is the perfect tool.  It will save you from having to manually log in and run uname -ir on each one, and copy and paste the results in some local file.

I am going to share a little Ansible playbook below, which I came up with just the other day.  The impetus came in the form of an announcement from DigitalOcean about the Spectre and Meltdown vulnerabilities.

While tinkering with Ansible, I discovered the hostvars dictionary, an awesome data structure containing every last detail about the operating system of each host in the playbook's inventory.  hostvars is populated in the gather_facts step of a playbook execution.  There are two items in a host's hostvars data which I needed:

  • ansible_distribution_version:  this contains the version of the host's particular OS distribution.  All my hosts are running Ubuntu, and the values for me are 14.04, 15.04 and 16.04.
  • ansible_kernel:  this is the kernel version currently running, e.g. 3.13.0-141-generic.

The playbook contains two hosts sections. The first, for all, exists only to go through the gather_facts step.  The second hosts section is for localhost.

Starting from the bottom, the end result we are going for is to write the distribution version and the kernel version for each host into a local file. We can create a file using the template action and an appropriate jinja2 template. We only want one file, and we want it locally, hence the first reason for this hosts: localhost section. Otherwise, we would create a file on each of the remote hosts in the inventory.

We want our template to render the contents of a dictionary into which we have stored all the version information we have gathered from our hosts. So let's create this dictionary with a set_fact: task. We can use the with_inventory_hostnames iterator, which lets us loop over all the hosts and puts each hostname in the item variable. In this loop, we update the versions dict using the following syntax:

{{ versions | combine( { item: somevalue } ) }}

The python equivalent would be:

versions.update( { item: somevalue } )

or in other words:

versions[item] = somevalue

Remember, item is a hostname, and in place of somevalue we want to put a string containing both the distribution and the kernel version.

If we remember to initialize the versions variable to be an empty dict at the beginning, we have all the pieces we need.
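Putting those pieces together, the playbook can be sketched like this (a reconstruction using the names from this post; the exact separator string and file paths are my guesses):

```yaml
- hosts: all
  gather_facts: yes          # populate hostvars for every host in the inventory

- hosts: localhost
  tasks:
    - set_fact:
        versions: {}         # initialize the dict we are about to fill

    - set_fact:
        versions: "{{ versions | combine({ item:
            hostvars[item].ansible_distribution_version
            ~ ' ' ~ hostvars[item].ansible_kernel }) }}"
      with_inventory_hostnames: all

    - template:              # render the dict into a local file
        src: dumpall.j2
        dest: ./versions.txt
```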


The dumpall.j2 template is very simple:
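It presumably just loops over the versions dict, something like this (my reconstruction):

```jinja
{% for host, version in versions.items() %}
{{ host }}: {{ version }}
{% endfor %}
```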


Download them both:

The output looks something like this:


I heavily borrowed from these StackOverflow posts:

How to replace Plone's default search page with a faceted search

Faceted search, as provided by eea.facetednavigation, offers many advantages over Plone's default search page. Thanks to the Zope Component Architecture, swapping out the default search page for a customized faceted search page is only a few quick steps away, as this Howto demonstrates.


  • A Plone site with eea.facetednavigation and plone.api installed
  • A faceted search page. We use one at the root of the site with an id of faceted_search.
  • A custom add-on distribution, as generated by mr.bob.

If you are familiar with Plone add-ons and the Zope Component Architecture it all boils down to overriding the @@search browser view. We'll see at the end of this post why this is the case.
Let's look at exactly what needs to be done.

Register the override

Create a file called overrides.zcml in your custom add-on. This file should be in the same folder as the main configure.zcml file in your add-on. Here it is:
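The listing did not survive in this copy of the post, but a browser:page registration in overrides.zcml looks roughly like this (the permission and for attributes here are plausible defaults, not necessarily the originals):

```xml
<configure
    xmlns="http://namespaces.zope.org/zope"
    xmlns:browser="http://namespaces.zope.org/browser">

  <!-- Re-register the @@search view, pointing it at our own class -->
  <browser:page
      for="*"
      name="search"
      class=".search.Search"
      permission="zope2.View"
      />

</configure>
```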


When this override is registered (i.e., the next time your site is restarted), the result will be that every time a client requests the browser view @@search we are going to execute our code in .search.Search. So let's write that code now.

Implement our custom browser view

We are going to need a crucial piece of information before we start writing our browser view. The information we need is the name of the text widget on our faceted search page.
We can find it with our browser's developer tools. So, load your faceted search page in the browser, then inspect the text input field. This field will have both a name and an id attribute, both of which should have the same value. The value will be a short string, likely consisting of one letter and one digit. In the figure, this value is c4. In your case, it will likely be different.


Now that we have the text field id, let's create a new file called search.py in the same folder as configure.zcml. (Note: in a typical add-on, you would put this code in the browser folder, or anywhere you like, but let's keep this example as simple as possible.)

Here is the code you need, and remember to use the id you just found instead of c4.
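The original listing is missing here, so a minimal sketch of the idea: the page id faceted_search and widget id c4 come from this post, while the function name is my own. In the real browser view, __call__ reads SearchableText from the request and calls self.request.response.redirect(...) with a URL built like this:

```python
def faceted_search_url(site_url, text, widget_id="c4"):
    """Map a classic ?SearchableText= query onto the URL fragment
    that the faceted navigation page reads its state from."""
    return "%s/faceted_search#%s=%s" % (site_url, widget_id, text)

print(faceted_search_url("http://yoursite", "hello"))
# http://yoursite/faceted_search#c4=hello
```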


As mentioned above, the faceted navigation page in this example has faceted_search as its id.

Save your files and restart the site.


In your browser go to http://yoursite/@@search?SearchableText=hello

(if you are running Plone locally, on port 8080 and your site id is Plone, then use http://localhost:8080/Plone/@@search?SearchableText=hello)

The site should automatically redirect to http://yoursite/faceted_search#c4=hello. Moreover, this should load your faceted search page and the text field should have the word hello in it. If any content on your site contains the word hello, there should also be some search results listed.

How this works

The Default @@search Browser View

We know that @@search is a browser view because of that "double @" prefix, and a quick grep reveals that it is defined in Products/CMFPlone/browser/configure.zcml like this:

default search

The ajax-search view is invoked for live-search, but we are not touching that here. If we look at the Search class in Products/CMFPlone/browser/, we see that it does not have a __call__() method. Thus, it leaves all the rendering to its template as defined in the configure.zcml file above.

What happens when we define our override as described above is that we are circumventing the default template from rendering, allowing our __call__() method to run instead.


Our __call__() method does a self.request.response.redirect(...), which allows us to send all searches to our faceted navigation page.


Of course, we also want to tell our faceted navigation page what to search for when we redirect to it. It turns out that all search forms in Plone (be they the default viewlet that is in the portal header of every page, or the search portlet, etc) submit the text that the user types in the form as a SearchableText query parameter. So this parameter is easy to retrieve from the @@search request before doing the redirect by doing this:

self.request.form.get('SearchableText', None)

Now we want to pass this SearchableText to our faceted navigation page. That's where the c4 field name comes in, which we inspected. Faceted navigation uses URL fragments instead of regular query strings, i.e. it uses the # hashmark to append queries and state to its URL. So we turn this:

http://yoursite/@@search?SearchableText=hello

into this:

http://yoursite/faceted_search#c4=hello

The faceted navigation page I created for one specific project has a checkbox widget for the tags used on the site. (We only used a controlled vocabulary of tags, so that normal editors are not allowed to add tags willy-nilly to their content. Therefore, the number of available choices in the widget is relatively small.)

Now, by default, Plone adds "Filed under:" links at the bottom of each content item that allow the visitor to view the results of a search for all content that has the same tags. Also, it adds the same links to each search result.

It is straightforward to use the same technique as described above to redirect these links to the faceted navigation while pre-selecting the right tag in the tags widget.

I leave this as an exercise for the reader.

How to add portlet widgets to eea.facetednavigation

If we want to add some static text anywhere on a Faceted Navigation page, eea.facetednavigation allows us to use portlets as widgets in any of the widget containers. Here is how. (Requires ZMI access).

You may want to use an existing portlet, but if you want to create an ad-hoc portlet, you can do it in the ZMI:

  • Go to portal_skins
  • Go into the custom folder
  • Select Page template from the Add dropdown menu, top right
  • Give it a simple name - I will refer to this name below as your_portlet_id
  • Paste in the following code:


<div metal:define-macro="portlet">
  <h1>Foo Bar!</h1>
</div>


Now go back to the site:

  • On your faceted navigation page, go to the Faceted Criteria tab
  • In the Top widgets section, click the + button
  • In the Add widget dialog, select the type Plone portlet
  • Give it a friendly title (but this would only be shown to you, not to the end user)
  • Under portlet macro, enter your_portlet_id/macros/portlet (replacing your_portlet_id with the name you chose earlier in portal_skins/custom)


Now you can go back in the ZMI portal_skins/custom, and use any HTML you want.



A Title for the Homepage

Over the years, I have been asked a number of times to "fix" the title in the browser window or tab for the homepage. As it turns out, there is a simple solution to this, and it's better than entering a space in the title field.

It's true -- you learn something new every day!

Plone automatically generates the <title> element of every page by concatenating two strings, separated by an em dash:

  1. The value of the Title field of the current page
  2. The value of the Site Title on the /@@site-controlpanel.


Thus, this page for example has the title:  A Title for the Homepage — Soliton Consulting

But what if you want some page to just show the site title?  Typically, you might want this on the homepage.


Simple!  Just give your homepage the same title as the site, and Plone will skip the concatenation business.
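The rule boils down to something like this (my own sketch in Python terms, not Plone's actual code):

```python
def browser_title(page_title, site_title):
    # Plone skips the concatenation when the two titles are identical.
    if page_title == site_title:
        return site_title
    return "%s — %s" % (page_title, site_title)

print(browser_title("A Title for the Homepage", "Soliton Consulting"))
# A Title for the Homepage — Soliton Consulting
print(browser_title("Soliton Consulting", "Soliton Consulting"))
# Soliton Consulting
```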


I stumbled across this when I went to look at the code, expecting to need to customize it.  It's a viewlet after all (plone.htmlhead.title), so that approach is simple enough.  No customization needed!

An nginx location directive for Plomino

I have been using the Ansible playbook for Plone lately, but I ran into a problem because of its nginx role.  Currently, the nginx role is written to disallow access to any URL path that contains /manage_, which is a good idea to prevent direct access to the ZMI.  It forces you to use an SSH tunnel when you are making any TTW changes in the ZMI.  However, Plomino defines several methods that start with manage_, and they end up getting blocked by nginx with 403 errors.  I wanted to preserve the added safety, while not breaking my Plomino apps, so I defined a nested location directive to do that.

Here is the location directive created by the Ansible playbook nginx role:


location ~ /manage_ {
  deny all;
}


And here is my modified directive:


location ~ /manage_ {
  deny all;
  location ~ /manage_(deleteDocuments|specificrights|refreshDB|generateView|replications|importation|exportAsXML|importFromXML) {
    allow all;
    rewrite ^/(.*)$ /VirtualHostBase/http/$server_name:80/Plone/VirtualHostRoot/$1 break;
    proxy_pass http://localhost:_your varnish server port here_;
  }
}

Extracting Google user account properties in Meteor

A simple customization of the simple-todos tutorial app using Google accounts

In Chapter 9 of the Meteor Tutorial you can learn about how to add user accounts and login/logout functionality to your sample todo app.  Towards the end, the tutorial suggests that the adventurous add the accounts-facebook package, to enable Facebook login.  I did, verified that I get a Facebook login button, and promptly removed the package (not a Facebook fan here!).  Instead, I added the Google accounts package:

> meteor add accounts-google
added oauth at version 1.1.2
added google at version 1.1.2
added oauth2 at version 1.1.1
added accounts-google at version 1.0.2
added accounts-oauth at version 1.1.2
accounts-google: Login service for Google accounts

When you do, you get a nice Google button in the Sign in menu, but it's all red and says "Configure Google Login".  In other words, a little setup is needed before you can log in with a Google account.  Fortunately, if you click the red button, you get detailed and fairly straightforward instructions for how to do so.  In short order, you should have it all working.

Customizing the {{username}}

The tutorial has us identify each task by the username of the account that created the task with the {{username}} template tag.  This works fine as long as we use simple username/password authentication, but as soon as we replace or augment it with Google accounts, the template tag is replaced with an empty string.

Since this template tag is in the scope of the task template, which is called in the context of an iteration of the results of a Tasks.find(...), the value of {{username}} comes from the expression Meteor.user().username in:

text: text,
createdAt: new Date(),            // current time
owner: Meteor.userId(),           // _id of logged in user
username: Meteor.user().username  // username of logged in user

Now, why does this remain empty?  Let's inspect the users in the Mongo DB.  Start the meteor application in one terminal window, and then open another terminal window and run:

> meteor mongo
MongoDB shell version: 2.4.9
connecting to:

Then run the following query:

meteor:PRIMARY> db.users.find()

If you logged in with a Google account, you will see it listed.  Note how in the whole json structure of this user object there is no username key.  That's why.

However, there are several interesting fields that could be used instead, or for other purposes:  name (the full name), email, given_name, family_name, gender, and picture.  Let's use given_name as the name to show next to each task.  Because of the way the json object for the Google account is nested, this is how we can refer to it:

text: text,
createdAt: new Date(),
owner: Meteor.userId(),
username: Meteor.user().services.google.given_name  // first name from the Google account

And now the first name of the user who created the todo will appear next to it!

Learn more

For a lot more customizations, and using GitHub accounts instead of Google, watch this video:


Getting Started with Mobile Meteor

The solution to a couple of problems installing the SDKs required to run Meteor as a mobile app

Today I'm skipping ahead to the Running your app on Android or iOS page of the Meteor tutorial.  The vast bulk of time required to perform these steps is taken up by downloading the various SDKs that are needed.  For this reason, I limited myself to just the Android version, and left the iOS version for another day.  Other than that, a couple of very simple commands are all it takes to get our simple-todo Meteor app to run either in an emulator, or directly on a mobile device.  And you are not limited to your local server, either - your mobile device app can immediately start talking to the remote server deployed on page 6 of the tutorial.  It is quite exhilarating to see your fully functional mobile app launched so quickly!

I encountered a couple of gotchas while running the add-platform android and the run android commands, due to environment variables not being set properly during the installation of the SDKs.  My platform is OS X Yosemite (10.10), and the Java environment I installed is the SE Development Kit 8 (jdk-8u-25).  This page automatically opened up when I ran the meteor install-sdk android command, and it contained the installation instructions.  I was also prompted to install the HAXM emulator acceleration, which I did.

> meteor add-platform android
Error: ERROR : executing command 'ant', make sure you have ant installed and added to your path. 

I located ant in /Users/yourname/.meteor/android_bundle/apache-ant-1.9.4, however setting the ANT_HOME variable was not sufficient.  The JAVA_HOME environment variable also needed fixing:

> ant
Error: JAVA_HOME is not defined correctly.
We cannot execute /System/Library/Frameworks/JavaVM.framework/Versions/Current/Commands/bin/java

The Solution

Some googling led me to the solution, which is to set the following two environment variables:

export ANT_HOME=/Users/yourname/.meteor/android_bundle/apache-ant-1.9.4
export JAVA_HOME=$(/usr/libexec/java_home)

After this, I could run ant properly:


> ant
Buildfile: build.xml does not exist!
Build failed

This spells success!

With that, all the commands for mobile apps run, and I could enjoy the tutorial app on my Android phone.



A First Look at a Basic Application Created by Meteor

After installing Meteor itself, the Meteor tutorial instructs you to create your first application with the following command:

meteor create simple-todos

The result is an application made up of three files (an html template, a javascript file for the application logic, and an empty css file), plus a folder of "internal Meteor files".

After spending a couple of minutes to see how the javascript file is structured and how it ties into the html template, I got curious about the magic that makes it all work.

The .meteor folder

The first level of the "internal Meteor files" folder looks rather harmless, with ids, lists of packages used, etc.  One hint of the submerged portion of the iceberg is given by the versions file, which lists 52 packages or libraries or whatever these things are called in the javascript world.

Initially, that's all you get from running the meteor create simple-todos command.  However, things get more interesting when you start the application:

> cd simple-todos
> meteor

When meteor starts, another folder is created, called local, which in turn contains two more folders, build and db.  This is where things get interesting.  But before diving in, let's see what the application sends to the client.

The client point of view

If you load the application in the browser as instructed by the tutorial and by Meteor itself at the command prompt, by navigating to http://localhost:3000, you can then inspect the resulting page with Firebug or your browser's developer tools.  The resulting html closely mirrors the application template, but don't be fooled!  Do an old-fashioned "view source" instead, and you'll see something rather different:  your browser actually received an html file with a <body> that is completely empty!  The <head> on the other hand, loads something like 40 different javascript resources, plus a dictionary of application constants.

What this means is that the page's entire DOM gets generated on the client by scripts.  Indeed, at the bottom of the list of the 40 javascript files that are loaded we can see something interesting.  The last one, /simple-todos.js, is the same as the one in our project top-level directory, except that before being sent to the client it got wrapped inside a

(function(){ ... })();

The <script> just before that is even more revealing.  It's called template.simple-todos.js, and contains:

Template.body.addContent((function() {
  var view = this;
  return [ HTML.Raw("<h1>Welcome to Meteor!</h1>\n\n  "), Spacebars.include(view.lookupTemplate("hello")) ];
}));

Template["hello"] = new Template("Template.hello", (function() {
  var view = this;
  return [ HTML.Raw("<button>Click Me</button>\n  "), HTML.P("You've pressed the button ", Blaze.View(function() {
    return Spacebars.mustache(view.lookup("counter"));
  }), " times.") ];
}));


It's easy to see the correspondence with our html template:


<h1>Welcome to Meteor!</h1>
{{> hello}}

<template name="hello">
  <button>Click Me</button>
  <p>You've pressed the button {{counter}} times.</p>
</template>

In other words, our template gets parsed by meteor and compiled into a script, which is sent to the client, and upon execution builds the DOM.

I feel a little uneasy about this.  Granted, a DOM inspector (like Firebug) shows me the rendered html, so it should be debuggable just like in the old days, but what if something goes awry in this whole chain?



The Meteor Install Script

What it does and where it puts things.

The first step in the Meteor tutorial is to install meteor with the following minimalist shell command:

curl | sh

Here is what it does:

  1. The script runs on OS X and Linux only, so first it checks which system you are on, and quits if it's neither.  Then it branches off according to your OS.
  2. There are also some checks for "very old" versions of Meteor (pre-April 2013), with instructions for how to deal with those.
  3. Any previous installation in ~/.meteor gets wiped.
  4. Any leftover temporary install directory in ~/.meteor-install-tmp gets wiped.
  5. It downloads the meteor bootstrap tarball and extracts it to ~/.meteor-install-tmp.
  6. It moves ~/.meteor-install-tmp/.meteor to ~/.meteor.
  7. It finds the symlink ~/.meteor/meteor, and copies the script scripts/admin/launch-meteor in the same directory to /usr/local/bin/meteor (sudo required).
  8. It prints the helpful message:
To get started fast:
$ meteor create ~/my_cool_app
$ cd ~/my_cool_app
$ meteor
Or see the docs at:


This is the happy path, but of course, the installer also deals with various kinds of error conditions.
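The file juggling in steps 3 through 7 can be replayed as a hedged shell sketch.  HOME is faked with a scratch directory and the tarball download is stubbed out (the post does not reproduce the URL or contents), so nothing real is touched; the real script finishes with a sudo copy to /usr/local/bin/meteor.

```shell
# Hedged replay of install steps 3-7 in a scratch directory.
FAKE_HOME=$(mktemp -d)
TMP="$FAKE_HOME/.meteor-install-tmp"

# Steps 3-4: wipe any previous install and leftover temp directory.
rm -rf "$FAKE_HOME/.meteor" "$TMP"

# Step 5: stand-in for downloading and extracting the bootstrap tarball.
mkdir -p "$TMP/.meteor/scripts/admin"
printf '#!/bin/sh\n' > "$TMP/.meteor/scripts/admin/launch-meteor"

# Step 6: move the extracted tree into place.
mv "$TMP/.meteor" "$FAKE_HOME/.meteor"

# Step 7: the real script copies this file to /usr/local/bin/meteor (sudo).
ls "$FAKE_HOME/.meteor/scripts/admin/launch-meteor"
```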

The version is set in the variable RELEASE in the script, so I suppose if you want to upgrade to a later version you need to download the script and run it again.  I presume the URL in the install command will always point to the latest version.

In a future installment, I will dissect the launch-meteor and the meteor scripts themselves, because they seem to be responsible for downloading all the node and other javascript dependencies.  For the time being, I am trying to achieve some kind of isolation by doing all this inside a nodeenv virtual environment.

Detect Mobile Browsers

A useful regex that can be plugged into about any environment, to detect nearly all major devices known to WURFL

Recently I had a need for a simple way to redirect all requests for a website to a different URL if the request was coming from a mobile device.  That was about the extent of it, no mobile framework was required, no special library or API.  I was happy to find an open source solution:

This very useful solution basically offers a single regular expression that is capable of detecting 15777 devices and 15606 user agent strings (as of this writing), which encompasses nearly all major devices detected by WURFL.  You can download it in 16 different flavors, ranging from Apache to IIS to Nginx rewrite rules, to pretty much any popular web development environment, such as Javascript, Python, Rails, Perl, PHP, ASP, C#, and more.

Android tablets, iPads, Kindle Fires and PlayBooks are not detected by design. To add support for tablets, add |android|ipad|playbook|silk to the first regex.

It is released into the public domain with the Unlicense.


In my case, I opted for the Apache rewrite condition.  In the following example, I added an extra twist:  the redirect only happens on a mobile browser's first request in any 24 hour period.  To that end, I set a cookie with an expiration time of 1440 minutes; as long as the cookie is present, no further redirect happens.

RewriteCond %{HTTP_COOKIE} !nomobilesplash
RewriteCond %{HTTP_USER_AGENT} (android|bb\d+|meego).+mobile|avantgo|bada\/|blackberry|blazer|compal|elaine|fennec|hiptop|iemobile|ip(hone|od)|iris|kindle|lge\ |maemo|midp|mmp|netfront|opera\ m(ob|in)i|palm(\ os)?|phone|p(ixi|re)\/|plucker|pocket|psp|series(4|6)0|symbian|treo|up\.(browser|link)|vodafone|wap|windows\ (ce|phone)|xda|xiino [NC,OR]
RewriteCond %{HTTP_USER_AGENT} ^(1207|6310|6590|3gso|4thp|50[1-6]i|770s|802s|a\ wa|abac|ac(er|oo|s\-)|ai(ko|rn)|al(av|ca|co)|amoi|an(ex|ny|yw)|aptu|ar(ch|go)|as(te|us)|attw|au(di|\-m|r\ |s\ )|avan|be(ck|ll|nq)|bi(lb|rd)|bl(ac|az)|br(e|v)w|bumb|bw\-(n|u)|c55\/|capi|ccwa|cdm\-|cell|chtm|cldc|cmd\-|co(mp|nd)|craw|da(it|ll|ng)|dbte|dc\-s|devi|dica|dmob|do(c|p)o|ds(12|\-d)|el(49|ai)|em(l2|ul)|er(ic|k0)|esl8|ez([4-7]0|os|wa|ze)|fetc|fly(\-|_)|g1\ u|g560|gene|gf\-5|g\-mo|go(\.w|od)|gr(ad|un)|haie|hcit|hd\-(m|p|t)|hei\-|hi(pt|ta)|hp(\ i|ip)|hs\-c|ht(c(\-|\ |_|a|g|p|s|t)|tp)|hu(aw|tc)|i\-(20|go|ma)|i230|iac(\ |\-|\/)|ibro|idea|ig01|ikom|im1k|inno|ipaq|iris|ja(t|v)a|jbro|jemu|jigs|kddi|keji|kgt(\ |\/)|klon|kpt\ |kwc\-|kyo(c|k)|le(no|xi)|lg(\ g|\/(k|l|u)|50|54|\-[a-w])|libw|lynx|m1\-w|m3ga|m50\/|ma(te|ui|xo)|mc(01|21|ca)|m\-cr|me(rc|ri)|mi(o8|oa|ts)|mmef|mo(01|02|bi|de|do|t(\-|\ |o|v)|zz)|mt(50|p1|v\ )|mwbp|mywa|n10[0-2]|n20[2-3]|n30(0|2)|n50(0|2|5)|n7(0(0|1)|10)|ne((c|m)\-|on|tf|wf|wg|wt)|nok(6|i)|nzph|o2im|op(ti|wv)|oran|owg1|p800|pan(a|d|t)|pdxg|pg(13|\-([1-8]|c))|phil|pire|pl(ay|uc)|pn\-2|po(ck|rt|se)|prox|psio|pt\-g|qa\-a|qc(07|12|21|32|60|\-[2-7]|i\-)|qtek|r380|r600|raks|rim9|ro(ve|zo)|s55\/|sa(ge|ma|mm|ms|ny|va)|sc(01|h\-|oo|p\-)|sdk\/|se(c(\-|0|1)|47|mc|nd|ri)|sgh\-|shar|sie(\-|m)|sk\-0|sl(45|id)|sm(al|ar|b3|it|t5)|so(ft|ny)|sp(01|h\-|v\-|v\ )|sy(01|mb)|t2(18|50)|t6(00|10|18)|ta(gt|lk)|tcl\-|tdg\-|tel(i|m)|tim\-|t\-mo|to(pl|sh)|ts(70|m\-|m3|m5)|tx\-9|up(\.b|g1|si)|utst|v400|v750|veri|vi(rg|te)|vk(40|5[0-3]|\-v)|vm40|voda|vulc|vx(52|53|60|61|70|80|81|83|85|98)|w3c(\-|\ )|webc|whit|wi(g\ |nc|nw)|wmlb|wonu|x700|yas\-|your|zeto|zte\-) [NC]
RewriteRule ^/(.*)$ [R,L,]
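As a hedged sketch of how the cookie can be set, mod_rewrite's CO flag attaches a cookie to the redirect itself.  The /mobile/ target and the cookie domain below are placeholders for illustration, not the values from my actual deployment:

```apache
# Hypothetical: /mobile/ and .example.com are placeholders.
# CO=name:value:domain:lifetime stores the cookie for 1440 minutes (24 h),
# and the first RewriteCond above skips the redirect once it is present.
RewriteRule ^/(.*)$ /mobile/ [R,L,CO=nomobilesplash:1:.example.com:1440]
```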

Plone + Salesforce: The Perfect Pairing

Thanks to a whole array of excellent add-ons for Plone to integrate it with Salesforce, your web content and your constituency database can seamlessly be tied together to simplify your work processes.

If you already have a Plone site and a Salesforce account (or are considering adding one or the other to your IT toolset), it won't be difficult for you to imagine ways in which your organization could become more effective, and its workload eased, if only the two could work together.  A few examples:

  • If the e-newsletter subscription form on your website could directly save to your Salesforce contacts, your staff could do away with the manual data entry, say from a subscription email to a new Salesforce contact.
  • Suppose your Plone site has a custom content type for directory entries for public display, but your "master" directory is maintained in Salesforce.  The two directories could be automatically kept in sync, so any edits to a given record in Salesforce are promptly reflected on the public website.  Or you may want the ability to edit a record on either side, and the synchronization to be bi-directional.  This way, you would only have to make an update once, and not worry about keeping track of having to repeat the same edits on multiple platforms.
  • If your Salesforce data is structured using multidimensional custom categories, you might want the same structure to be reflected on your website.  Out of the box, Plone can handle a single taxonomy with multiple tags for each content item, but you can have custom types with as many categories as are needed.  Managing multiple categorization vocabularies can become a site configuration chore.  By synchronizing your Salesforce data with Plone, you can expose a rich and multi-faceted vista on your valuable content to your audience, without any additional editorial intervention.


Soliton Consulting can support you with these use cases, and more.

To showcase some of our experience in this field, please consider the following client solutions:

Web-to-Lead Forms

The Fund for Global Human Rights recently upgraded their multi-lingual website from Plone 2.1.4 to 4.2.  At the same time, their E-Newsletter signup form was integrated with their Salesforce account using the well-established Salesforce PFG Adapter.  This is a very quick and affordable solution that provides immediate benefits to any organization.

Content Synchronization

Online directories are prime candidates for batched synchronization between a Salesforce database and the content of a Plone site.  Think Local Seattle took advantage of this solution for a streamlined workflow.

Configuration of Complex Data

501 Commons has a sophisticated search functionality for their provider directory.  Each directory entry is tagged for multiple orthogonal categories, such as:  Areas of expertise, Counties served, Experience, Foreign Languages, Communities of color served, Other special populations served, as well as other keywords.  The vocabularies, i.e. the sets of all possible values for each of these categories, are fetched dynamically from Salesforce to build the search options on the navigation page.  Of course, all the provider directory entries themselves are also synchronized directly from Salesforce. Please see the post Diazo for Web Grafting for other aspects of this interesting project.


Please contact us if you would like more information about integration of Plone and Salesforce.

Plone Open Garden 2013

A report out from PLOG, which took place April 3rd - 7th, 2013, in beautiful Sorrento, Italy.

O' sole, o' mare...! Calling attention to the sunshine and the Gulf of Naples, punctuated with quick arm and hand sweeps, and uttered with the appropriate Neapolitan accent, one is happy to let it all sink in, and finally leave behind the damp, grey weariness of another godforsaken Pacific Northwest winter.

There is no better way for plugging into the Plone community than to show up at any one of the many events happening year-round and worldwide

For the seventh year, a contingent of Plone professionals again converged on the classy Hotel Mediterraneo in Sorrento for the annual Plone Open Garden, and I counted myself among the lucky ones to partake in the five days and four nights of intense, yet relaxed coding, sharing and - most of all - bonding with other members of this extraordinary community.  For some years now I had PLOG on my radar, but this year was my first opportunity to experience it first-hand.  The superb dining and the impeccable style of the hotel's ambiance and of its garden certainly helped, but the overwhelming feeling I got from all fifty-odd participants was one of delight at being reunited one more time and having a chance to spend a few days together doing what we all love to do.  Coming from all corners of Europe (Finland, The Netherlands, the United Kingdom, France, Slovenia, Catalonia, Germany, Spain, and, of course, Italy) and from as far as Brazil, not to mention yours truly from the United States, for many this was the first chance to meet face to face since the October 2012 Plone Conference in Arnhem.  All our electronic communications channels notwithstanding, the Plone community is very much a human community, and humans need personal interactions to reinforce this sense of belonging everybody craves.  Anyone out there wanting to find ways to plug into Plone, or just learn more about it - take note:  there is no better way than to show up at any one of the many events happening year-round and worldwide.  Without aiming to detract from any of them, in my humble estimation, PLOG tops them all.  My heartfelt appreciation goes to the Abstract team who made it all happen, fearlessly led by Maurizio Del Monte.

From the many excellent morning talks in the Speakers' Corner and all the conversations and sprints that happened on this occasion, I got the distinct sense that the energy and momentum behind several strategic directions are significantly increasing: to name a few, the marketing effort and the upcoming and the Products Party.

Personally, with Asko Soukka's help I learned how to integrate (slides), a terrific testing framework, along with Travis continuous integration tests and Saucelabs, into any given add-on, and I integrated robot tests into Plomino.  I enjoyed learning about NixOS, and I also want to re-share a 2007 paper by Jonah Bossewitch: Fabricating Freedom: Free Software Developers at Work and Play.  Brought to our attention over dinner and tweeted by Silvio Tomatis, the paper paints a picture of the open source community, and the Plone community in particular, in which many of us will not fail to recognize ourselves.

Please enjoy some pictures I took at PLOG, and now Onwards to Brazil!

Simple and powerful web service access with YQL

The Yahoo! Query Language unifies access to a plethora of web services with a simple SQL-like language. Apps run faster, with fewer lines of code, fewer network calls, and eliminating the pain of locating the right URLs and API documentation to access and query each Web service.

During Eric Brehault's excellent Plomino training at this year's Plone Conference I became acquainted with a new, exciting bag of tricks:  the Yahoo! Query Language.

Exhibit A: The Query

select * from weather.forecast where woeid in (select woeid from geo.places where text="arnhem")

Looks like a SQL statement, you say?  You would be forgiven, but only if you try this:

YQL Request

Notice that the query string is essentially the previous "SQL statement", after URL encoding.

The response to the above HTTP request is JSON data, encoding the weather forecast for Arnhem, Netherlands.  Are the little gears in your brain turning yet?
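That encoding step can be sketched in a few lines of Python.  The endpoint path below is the public YQL REST endpoint as it was documented at the time (the service has since been retired, so this only demonstrates how the URL is assembled, not a live call):

```python
from urllib.parse import urlencode

# Public YQL REST endpoint as documented at the time (service now retired).
YQL_ENDPOINT = ""

query = ('select * from weather.forecast where woeid in '
         '(select woeid from geo.places where text="arnhem")')

# The whole request is just the "SQL statement", URL-encoded, in q=...
url = YQL_ENDPOINT + "?" + urlencode({"q": query, "format": "json"})
print(url)
```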

Exhibit B: The Console

While the link above resulted in raw data, the following link takes you to the YQL console:


Notice the following:

  • The large text box at the top is pre-populated with the previous query
  • Directly under it, click the Test button, and try switching between XML and JSON, as well as between the Formatted and the Tree representations.
  • The two right tabs allow you to experiment with the two individual data sources that are joined by the query.
  • Finally, in the text box along the bottom you can find the REST Query I linked above, which returns the raw data.

The Proof

The little weather icon to the left was generated with a small snippet of jQuery utilizing the same query URL from above.  Note how no javascript API is loaded from remote sources, and we are combining two different web service data sources in a single AJAX call.  Safe and fast.


Piecing it together

Go back to the YQL console, and drill down into the list of Data Tables on the right, until you find weather.forecast.  The large text box at the top will be populated with a sample query:

select * from weather.forecast where woeid=2502265

Here, woeid=2502265 represents Sunnyvale, CA.

Next, go back to the Data Tables list, and click on geo.places.  This time, the sample query is:

select * from geo.places where text="sfo"

Copy the query, and go back to weather.forecast.  Instead of woeid=...., let's use the in operator, and put a pair of parentheses around the 2502265 value.  Finally, replace the 2502265 value with the query from geo.places:

select * from weather.forecast where woeid in (select woeid from geo.places where text="arnhem")

That's how easy the console makes it for us to discover how to piece together any web service query we can think of!

Finally, it's just a matter of pulling out the pieces of data from the JSON response with a little bit of jQuery.
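The extraction itself is just a walk down nested keys, regardless of language.  Here is a Python sketch against a representative response shape; the exact field names are recalled from the old service's output, so treat them as assumptions:

```python
# Representative (assumed) shape of a YQL weather.forecast JSON response.
response = {
    "query": {
        "results": {
            "channel": {
                "item": {
                    "condition": {"temp": "18", "text": "Partly Cloudy"}
                }
            }
        }
    }
}

# Walk down the nested keys to the current conditions.
condition = response["query"]["results"]["channel"]["item"]["condition"]
print(condition["text"], condition["temp"])
```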

Of course, by playing around with the console, or even reading the extensive YQL documentation, we can make the queries much more efficient and optimized, but this is a great start.

If you want to use data from such disparate sources as Zillow, Craigslist, Flickr, Pidgets, Wordpress, Yelp, Facebook, Twitter, YouTube, Answers, and many others, I can't recommend YQL highly enough.

Diazo for web grafting

An introduction to Diazo, as seen in the integration of the eea.facetednavigation customized for the 501 Commons Resource Directory into the new Washington Nonprofits site.


I recently completed an interesting little project, which provides me with the opportunity to showcase a very useful technology - Diazo.  To start, take a look at the following two websites:

At first glance, these two sites seem to have nothing in common, except for the general topic they seem to present:  They come from different organizations, they look very different, and judging by the respective main navigation menus, the rest of the site has very different content.  However, if you look just a little more closely you will notice that apart from the header and footer of the two sites, the main page body is actually the same.  It works the same in both sites, too:  you can click on checkboxes, expand the various filters in the middle column such as the "Counties served", and the results in the right column are dynamically updated accordingly.  All the filters and search results are the same, too.  (This is an example of a "faceted navigation", which is some interesting functionality in its own right.)

I'll let you in on the secret now:  they are actually the same site.

A bit of history:  Over a year ago I participated in creating the 501 Commons site, with its faceted navigation directory, by customizing the Plone add-on product eea.facetednavigation, where all the filters and search results are dynamically loaded from SalesForce.  Earlier this year, Washington Nonprofits kicked off a project to redesign their old website.  As part of this re-vamping, they negotiated with 501 Commons to have the same resource directory embedded within their new site.  The redesign of the new Washington Nonprofits site was commissioned to a separate company, but I was pulled in to solve this particular embedding problem.  The requirement was that the new site would launch with the 501Commons resource directory seamlessly embedded into one of their pages, while leaving all control over the directory itself in the hands of 501 Commons.

Diazo - plastic surgery without the scalpels

With Diazo, all that was needed for this to happen was the HTML and CSS of the new Washington Nonprofits site design, which was available through the browser at a temporary URL (the new site had not launched yet).  I never needed access to the source code or any implementation detail of the new Washington Nonprofits site, and it never had to be modified or altered.  A subdomain ( merely had to be set up and pointed to the server hosting

Even more remarkably, the implementation on the 501 Commons site did not need to be altered for this to happen, either.  Consider that the two sites are built on completely different platforms, hosted in different environments, and managed by independent organizations.  501 Commons is a Plone site with a SalesForce integration, hosted on Soliton Consulting's servers [now moved to a different hosting provider], while Washington Nonprofits is definitely not Plone, and could literally run on any other platform.

Applying a custom graphics design to a website is a process known in the industry as "skinning", or "theming".  That is, designers produce the desired look and feel of a site, usually manifested in the form of Photoshop composite files.  Then, that design is converted into HTML and CSS code.  The resulting code is then usually applied to the underlying website platform code.  In most Content Management Systems, this requires writing code that is tailored to the very specific implementations of the various functionalities of the site, e.g. menus, search, sidebars, etc.

Diazo makes it possible to "skin" a site without modifying any of the underlying code.  The magic happens in a so-called "rules" file, which is an XML file containing a set of transformation rules.  These rules are then translated into XSLT transforms, which are applied on the fly to the HTML dynamically generated by the server.  The rules act as go-betweens to modify a static HTML theme file, and place the dynamic content into the theme skeleton.  For example, rules can say "drop this element of the theme", or "replace that block of theme with this piece of content", or "insert this piece of content before this block of the theme".  XPath or CSS3 selectors are used in the rules to identify elements in the theme or the content.  The theme skeleton can thus be completely rearranged by the rules.  Of course, the theme refers to the CSS styles, which is where the graphic design takes shape.  Please refer to the last section below for some example rules.
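To give a flavor of the rules syntax, here is a minimal, hypothetical rules file; the selectors and theme filename are invented for illustration and are not taken from the actual integration:

```xml
<rules xmlns=""
       xmlns:css="">
  <!-- the static HTML skeleton captured from the new design -->
  <theme href="theme.html" />
  <!-- pour the dynamic content into the theme's main column -->
  <replace css:theme="#main" css:content="#directory" />
  <!-- discard a theme element we don't need -->
  <drop css:theme="#sidebar" />
  <!-- insert a piece of content before a block of the theme -->
  <before css:theme="#footer" css:content=".status-messages" />
</rules>
```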

Diazo also includes the ability to selectively apply a theme, depending on the URL used in the request to the server.  And so it is that the same server, indeed the same Plone site, can serve up two apparently completely different sites.  One site is the "original", which is left untouched by Diazo, and the other has the Diazo skin for  (Of course, the former has a skin of its own, but that is a "traditional" skin, deeply ingrained in the code that generates all the site components.)

The reason why I called Diazo a new "technology" at the top of this article is that it is completely independent of any web framework.  It works on any platform, regardless of whether you use Plone, Drupal, Wordpress, Django, Pyramid, Ruby on Rails, or what have you.  Of course, it is now part of the Plone core, so Plone makes it particularly easy to adopt, but that does not make it specific to Plone.

New prosthetics with Diazo

Medical science has opened up many new possibilities with artificial limbs, organs, skin transplants, etc.  Diazo allows similar advances in web development.  No longer do we have to put all our eggs in one monolithic technology basket.  It is now very easy to just take one site's skin, and graft it onto a different site.  The end result is that the two sites appear to be one and the same, with the capabilities of both.  And why limit ourselves to two?

Every web platform has distinct strengths and weaknesses.  Blogs, shopping carts, custom data-driven web applications, wikis, issue trackers, forums, ... many platforms have tried to incorporate as many different applications into their core or their set of add-on plugins as possible, often with less than stellar results.  It is now easier than ever before to use different solutions and integrate them into one seamless site.

  • Use a WordPress blog inside your Plone site
  • Integrate a Trac issue tracker within a Drupal site
  • Merge a Plone site with a Django application and a separate shopping cart framework


If you have any specific ideas for how Diazo might apply to your situation, please let me know in the comments below!

A few sample rules

The following rule takes the <title> element from the content, and replaces the last 11 characters, i.e. it substitutes "501 Commons" with "Washington Nonprofits" in the title:

<replace content="/html/head/title">
  <xsl:variable name="valueLength" select="string-length(//html/head/title/text())-11" />
  <xsl:value-of select="substring(//html/head/title/text(),1,$valueLength)"/> Washington Nonprofits
</replace>

The next example shows how I made links open a new browser tab or window:

<xsl:template match="//a[contains(@class,'internal')]/@class">
        <xsl:attribute name="class">external</xsl:attribute>
        <xsl:attribute name="target">_blank</xsl:attribute>
</xsl:template>


Find out more

A Python Script for Recursive Search-And-Replace

I wrote a little Python script to solve a find and replace problem:
The problem was that I had a directory tree with several thousand files, about 2000 of which were static HTML (yeah, don't ask....I blame my predecessor for this), with the typical Google Analytics tracking code, e.g.:

<script type="text/javascript">

var _gaq = _gaq || [];
_gaq.push(['_setAccount', 'UA-11111111-1']);

(function() {
  var ga = document.createElement('script'); ga.type = 'text/javascript'; ga.async = true;
  ga.src = ('https:' == document.location.protocol ? 'https://ssl' : 'http://www') + '';
  var s = document.getElementsByTagName('script')[0]; s.parentNode.insertBefore(ga, s);
})();

</script>

Now my client started worrying about the privacy of their users, and asked me to remove all these snippets.  By the way, in Europe it will soon become illegal to use Google Analytics without asking visitors whether they consent to being tracked.

I wrote a regex that would capture this multi-line snippet easily.
In addition, several HTML files also had event handlers calling the GA tracking script, e.g.:

<a href="crime_and_punishment_jp.pdf" onMouseDown="javascript: _gaq.push(['_trackEvent', 'Translations', 'Downloaded PDF', 'Crime and Punishment vol 2 (Japanese)']); ">

The following script works fine, and takes only a few seconds to traverse several thousand files.
Here is my script:  I ran it in Python 2.5, hence I didn't have the "with" statement available.

from os import walk
from os.path import join
from re import compile, DOTALL

def handler(error):
    print error

# Representative patterns for the GA <script> block and the inline
# onMouseDown handlers; the exact regexes from the original run were
# not preserved in this post.
script_pattern = compile(
    r'<script type="text/javascript">\s*var _gaq.*?</script>\s*', DOTALL)
event_pattern = compile(
    r'\s+onMouseDown="javascript: _gaq\.push\(.*?\);\s*"', DOTALL)

modified = []
event = []

for root, dirs, files in walk('myfolder', onerror=handler):
    for name in files:
        path = join(root, name)
        f = open(path, 'r+b')
        all =
        fixed = script_pattern.sub('', all)
        if fixed != all:
            modified.append(path)
        cleaned = event_pattern.sub('', fixed)
        if cleaned != fixed:
            event.append(path)
        if cleaned != all:
  , 0)
            f.write(cleaned)
            f.truncate()
        f.close()

for i in modified: print i
print len(modified)

for i in event: print i
print len(event)

A way forward for plone.api

As one of the Munich plone.api sprint participants, and following up to a lively discussion on the plone-developers list that brought in the voices of many dedicated Plone contributors, I want to make a modest proposal.

It is significant that so many of the dedicated contributors to the Plone core and its ecosystem as a whole felt compelled to weigh in to the discussion.   Without exception, all the voices in the discussion melded together to form a decidedly constructive chorus.  Clearly, a nerve was struck.  If you were not in Munich, I can attest to the fact that this topic had the ability to galvanize every single person who participated in the discussions, no matter their level of experience in Plone development.

As so often happens in a lively debate, minds produce copious amounts of ideas.  This, of course, is a good thing.  It can be bewildering, too.  We are lucky that all of these ideas were not just voiced in fleeting verbal conversations, but that we have them, black on white (or whatever colors you use), in our inboxes and list archive.  It would certainly be useful to attempt to synthesize all the viewpoints we have heard so far.

My intention, though, is to go back to the start.  It seems to me that there was a point in the Munich open space where the discussion definitely lifted off.  The lift-off happened when someone admitted to not knowing, or not being able to remember, how to write the code to do something that should be simple, such as copying a content object.  Everyone could relate to that frustration.  Everyone.  No doubt, it wasn't just about "copying an object".

I think we should not confuse the momentum behind plone.api with wanting to create a "great" API.  If we let the conversation go in that direction, everyone is going to produce a different wishlist, and there is no way we can make everyone happy.  I'm also not completely on board with the idea of solving 20% of the use cases that cause 80% of the problems.  That sounds too much like a common denominator approach, that could end up making everybody unhappy.

The momentum originates from the possibility that, someday, with this API I might actually be able to write (and remember) the simple method call required to do a very simple thing.  And so, while I'm proud of the sphinx docs we produced, and of our "document first" approach, perhaps to some extent this approach distracts us from where the energy is, and what we are trying to do.

Instead (or in addition) the energy comes from:  "I really hate that to do A I have to use this crappy xyz code!"

So, can we start a collection, a little gallery of horrors?  Here is a silly example of what I mean.  I'm going to paste a code snippet that I hate, and I'm going to explain what I would like to have instead.  After that, people can weigh in on what disadvantages my desired "API" would have, or why it would not work, or how it could be solved better.

My example

tal:define="is_manager python:context.portal_membership.checkPermission('Manage portal', context);"

<a href=""
   tal:content="string:Site Setup"
   tal:attributes="href string:${context/portal_url}/plone_control_panel;
                   title string:Site Setup">
  Site Setup
</a>
Why this sucks

The problem is not TAL, it's the double indirection to a method that I have to call with what looks like a set of positional arguments in an arbitrary order.  Could I please just have a global is_manager that I don't need to define?  If I set up my own set of custom permissions, I guess I'll be fine doing the python:context.portal_membership.checkPermission('Do something unusual', context).  Plone ships with a set of stock permissions, other than manager, so all of those should be available globally.  Actually, it would be nice if a global is_mycustomperm could be generated automatically when a new permission is defined.

Actually, this example contains two horribles in one.  What's with the ${context/portal_url}?  I can never remember when I can use portal_url and when I can't.  Why context?  Why would portal_url depend on it?  Subsites don't ship with Plone out of the box.

What would be better

Can I have this, please?

<a href=""
   tal:content="string:Site Setup"
   tal:attributes="href string:${portal_url}/plone_control_panel;
                   title string:Site Setup">
  Site Setup
</a>

Discussion, pros/cons

Is there a performance penalty to having all the permissions computed for the context at request time?



I don't have a strong preference on how this little gallery of horrors should be implemented.  Sphinx might work.  Google moderator, maybe.  [I'm a fan of wikis, I like how in MediaWiki (e.g. wikipedia) there is a separation between the content and the discussion about the content (they are on different tabs), and yet there is no barrier to either editing the content, or adding to the discussion, and full history is preserved (again with no barrier, no context switch).]

It's great that we started writing the documentation for plone.api, and even included examples for each element of it.  But somehow divorcing this documentation from the horrors we are trying to fix seems counterproductive.

Of course, the "little gallery of horrors" and the "official" documentation have to be integrated somehow, and this is another problem.

Finally, I think that while it's certainly better to start small than not at all, it should be possible to let plone.api grow over time to cover more than the 20/80 scenario that was proposed.

Plone Konferenz: Day 3

The final day of talks and open spaces

Daniel Kraft, D9T GmbH
User Interfaces in JavaScript

This presentation attempted (and in my opinion succeeded in) making the case that an entire web application can be built in Javascript.  At this point, I am not too hard to convince anymore.  As Philipp von Weitershausen demonstrated at the 2011 Plone conference in San Francisco, Javascript is plenty fast, so no concerns there.  Daniel also claimed that the error logging problem can be solved with some tools that send log events back to the server (I did not write down the names).  Javascript quite naturally allows teamwork with a separation of concerns between people working on the templates, the CSS and the scripts.  Of course, there is JSON to handle sending data back and forth between client and server.  One thing Daniel did not talk about is the server side, and that's about my only complaint.  He also touched upon compression of HTML, CSS and Javascript.  And he mentioned A/B testing for interface design.

The whole presentation was based on the experience Daniel gained rewriting an e-commerce application in Javascript, but there was no demo or details of the project.


Andrew Mleczko
Building the project management software of your dreams (slides)

The title may seem a bit hyperbolic, but by the time we got to the demo it became clear that it was no exaggeration.  Red Turtle pulled off an amazing feat here.

I can't remember if it was the Emilia Romagna region, or the European Union that partially funded this collaborative effort between Red Turtle and two other local companies.  The premise was:  we use a lot of different tools to fulfill our project management needs, but there isn't a single one that does it all.  So we are going to have to build it.  But why reinvent the wheel?  Just use all the tools we currently use as components of a "mega mashup".

  • Pyramid for the main application, with good support for third-party authentication thanks to Velruse.  The Pyramid admin UI is the glue that holds everything else together, with one common page frame for Plone, Trac and Google Apps.
  • Plone for SSO, intranet and knowledge management; easy to integrate with Pyramid and Trac
  • Trac for bug tracking and flexible reports; supports WSGI, easy to integrate with Pyramid using a few plugins
  • Google Apps for OAuth, scheduling and document management
  • Twitter Bootstrap as the CSS framework.  This allowed them to build a beautiful UI with progressive enhancement out of the box.

It was amazing to see in the demo that through Pyramid all the components could use each other's data.

Future integrations:
Redmine, GitHub, Dropbox, Yammer

Timo Stollenwerk & Sebastian Böttger
TYPO3 vs. Plone - Der Shootout

I have a few misgivings about this one.  For one, the sound volume was so low that I could hardly hear the moderator or the two contenders, and my jetlag-addled mind took that as a cue to seek some sleep whenever it could.  For another, I had never really heard of TYPO3 before, and I doubt I will in the US, so that too kept my interest fairly sluggish.  On the other hand, the idea was good, and since TYPO3 is a very popular LAMP-based CMS in Germany, everybody else seemed to be really into it.  It might be interesting to do a similar "shootout" between Plone and Drupal in the US.  TYPO3 seems to have a pretty powerful backend UI, with what looked to me like a Deco-style drag-n-drop, tile-based layout system.  Timo scored a point and a round of wild applause when he demoed Diazo to instantly "steal" the TYPO3 skin and apply it to an OOTB Plone site.

Keynote von Prof. Udo Helmbrecht
Verminderung von IT-Sicherheitsrisiken (slides coming soon)

Tr.:  Reducing IT security risks

Prof. Helmbrecht is the director of ENISA, the European Network and Information Security Agency.  And ENISA uses Plone.  His talk was pretty interesting from the perspective of how an agency such as ENISA has to look ahead to all kinds of emerging threats.  For example, cloud computing:  governments may not want to put their sensitive data, or the sensitive data of major national industries, in the cloud if there are no guarantees that the data will not get stored in datacenters outside their borders, especially in a country that could potentially become hostile.  But on the other hand, the economies of scale that make cloud computing possible would break down if such restrictions were imposed on it.  In the end, any unrealized risk might at some point become reality, as the botnet and Stuxnet cases proved.  Then there is social networking, and the risks involved in mobile apps and HTML5.

He also talked about how most of the advisory reports produced by Enisa are put together by teams of independent experts, and some discussion arose at the end around the question of putting a Plone community member on one of those committees. We could certainly contribute a lot, and so it seems like a good idea.

I feel it was a very smart strategic move on the part of the Konferenz organizers to invite Prof. Helmbrecht to keynote for us.


Lightning Talks

One talk showed an online math tutor for 5th and 6th graders, and it's built in Plone!  JC Brand presented a Lorem Ipsum generator, which can create dummy instances of any content type, filling in as many fields as required with Lorem Ipsum text.


Open Spaces

I joined the open space that picked up where yesterday's left off.

In the spirit of making things easy that should be easy, and after realizing, as a group, that there was no consensus or even clarity on how to duplicate a content item, it was decided to start writing a wrapper API that would allow developers to accomplish common tasks with one simple (and easy-to-remember) method call.

We discussed various approaches.  In the end we decided that the easiest thing would be a PHP-like API for about 20 of the most desired tasks, without worrying that it may not be "pythonic".  Treating objects like python dicts (e.g. a User) would cause significant complications (a User could be an ACL object, an LDAP user, or a membrane user, each of which has to be treated differently), and we don't want to have to cover all possible cases.  We also thought that the API would be split into two sets:  one that will simply be the "easy" and "recommended" way to do things from now on, and another that would only stay around until the thing it works around is fixed for good, and would then be deprecated.

Here is a sample of Python pseudocode showing how we would use the API (not sure how long this pastebin will stay online).
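
To make the shape of the idea concrete, here is a runnable toy sketch of such a facade, written against a stand-in "portal" object.  All the names below (api, create, copy) are hypothetical, invented for this illustration; they are not the actual API we settled on:

```python
# A toy facade: one memorable call per common task, instead of
# multi-step tool-lookup boilerplate. All names are hypothetical.

class _Portal:
    """Stand-in for a CMS site, just to make the sketch self-contained."""
    def __init__(self):
        self.content = {}

class api:
    """Flat, PHP-style facade over the portal."""
    _portal = _Portal()

    @classmethod
    def get_portal(cls):
        # One call replaces the getToolByName/getPortalObject dance.
        return cls._portal

    @classmethod
    def create(cls, type, id, title):
        cls._portal.content[id] = {"type": type, "title": title}
        return cls._portal.content[id]

    @classmethod
    def copy(cls, source, target_id):
        # The "how do I duplicate a content item?" question, answered once.
        cls._portal.content[target_id] = dict(cls._portal.content[source])
        return cls._portal.content[target_id]

portal = api.get_portal()
doc = api.create(type="Document", id="front-page", title="Welcome")
dup = api.copy(source="front-page", target_id="front-page-copy")
```

The point is not the implementation but the calling convention: each task is one short, guessable call with keyword arguments, instead of tool lookups copied from somewhere else.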

The new api will be called plone.php.  Not.

Plone Konferenz: Day 2

More great talks, a city tour with a car wreck, the famous party, and some other things.

Slept in, missed first talk.


Robert Rottermann
Web-Mashups mit Plone und Diazo

UAProf (User Agent Profile):  an XML file generated by the manufacturer of the device, but it can contain mistakes that cannot be fixed by the community.  Some devices (like the iPhone) don't provide a UAProf.

WURFL (Wireless Universal Resource File)

  • organizes device capabilities into capability groups
  • since November 2011 no longer open source
  • alternatives exist


Enter Diazo

  • Plone asks client for properties
  • according to client properties, Diazo manipulates URL


Jens W. Klein
Ausfallsichere Kultur mit Plone - Effektives redundantes Hosting mit OpenSource Boardmitteln (slides)

Tr.: "Failsafe culture with Plone - effective redundant hosting with open-source on-board tools"

Introduction:  the context for this talk is the regional agency for the promotion of arts and culture in Lower Austria (Kultur Niederösterreich), and how they deployed Plone for all their sites.

Plone + virtualization + redundancy = thumbs up!



  • KVM



  • OpenAIS
  • Pacemaker
  • Corosync Cluster Engine
  • OSF-Scripts
  • DRBD - Blockdevice (filesystem) replication


web publishing

  • nginx
  • varnish proxy cache
  • pound load balancer
  • Zope instances
  • ZODB, MySQL, Samba


Massimo Azzolini
Scalable Plone: from town-wide sites to regional portals and Intranets (Slides)

Small sites don't need anything but Plone with CacheFu and Apache.


Large sites:

  • integration of google search appliance
  • anonymous view for editors:  editors have a way to switch to a view that shows them the site as if they were not logged in
  • redturtle.smartlink
  • rt.purge (?):  to purge the varnish cache on demand, when content managers want new content to be pushed out at a specific time
  • Newsletter:  Singing & Dancing, add-ons (collective.dancefloor)
  • Tag Cloud:  collective.vaporization
  • Maps:  collective.geo



  • IIS in front of everything
  • three servers, each with the following stack:
      • apache
      • varnish
      • Pound
      • 4 zeo clients
  • one zeo server
  • ZODB partitioned


"siege" for load testing




Subsites:

  • do you really need it?
  • yes, if you want to create an internal link to content inside another subsite
  • create a collection that takes content from more than a subsite
  • find documents from outside the subsite as well
  • custom theme

redturtle.subsites:  similar to lineage, but context-sensitive, depending on the domain used to access the site


see blog post



Internos (between us)

  • user dashboard/bookmarks, personal notifications
  • mercatino ("little market")
  • the expert replies

Standard plone installation

  • custom theme
  • "usual" add-ons
  • 7000 registered users
  • Auth with Active Directory
  • used standard plone dashboard, but one column of dashboard is always shown on the left side of the whole intranet
  • rer.passaparola
  • rer.bookcrossing


Documents, events, news, extreme management, ploneboard


Andreas Jung
Von Plone zum EBook oder PDF - Dokumentieren und Publizieren aus Plone und mit Plone

Tr.: From Plone to EBook or PDF - Documenting and Publishing from Plone and with Plone

Single-Source Multi-Channel Publishing

This looks like a very powerful system!

Keynote von Matt Hamilton

"…It's Like Buying a Relationship"

I would summarize Matt's keynote thus:  Plone is not the code, it's the community

The keynote centered on a quote, from an article about CMS licensing costs, that Matt took exception to:

This brings us to CMS licensing costs. These can be modest, or they can add up to millions of dollars, depending on which solution you're looking to buy. Your budget can start at $5,000; $20,000; $50,000; $100,000; or $250,000, just for the license. It is still a common misconception that open source WCM is free. You may not pay for the license, but you get what you pay for.

Further down, the quoted article uses the phrase in the title of the keynote, which Matt turned around sarcastically:  there are only certain types of relationship you can buy with a wad of cash, and they usually don't last very long...  As Plone developers/implementers/... you can pay us for our services, but our relationships are real.


Lightning Talks

Stefan Antonelli:  time-lapse video of preparations, the day before plonekonf

Daniel Kraft:  Hosting Must-Haves

  • Backups:  do them regularly, and test your restores
  • plonevulntest (not released)
  • Tested rollouts

Armin Stroß-Radschinski:  Plone in der Nähe von OLAP - Ein Argumentationsansatz (Plone close to OLAP, a few arguments in favor)

Robert Niederreiter: LDAP plugin

Armin Stroß-Radschinski: Plone, Zope, Python brochures


Open Space

I went to the one led by eleddy and Jörg Baach.  Theme:  Things that should be easy but are not.

Later I joked with Jörg that it was like a "Ploners Anonymous" group, with all the steps:  admission that we have a problem, rage, acceptance...  It felt good!

Eleddy captured notes, some of which already ended up on the Angry Plone Developer SMASH google moderator.


City Tour

As a pre-party extra-curricular activity we were invited to participate in a guided tour of Munich.  The title was "The other Munich".  Behind the architecture, the hospitality, the art and the celebrations Munich has a completely different historical dimension, which many are only vaguely aware of.  Before some city government PR geniuses renamed Munich as the "Metropolis with a Heart", it used to be the "Capital of the Movement".  Here is where the NSDAP (aka the Nazi party) was founded, this is where the national socialist movement started its ascent to power.  Munich is also known for many courageous acts of resistance, such as the students' "White Rose" and Georg Elser.  Resistance came from bourgeois and religious circles, as well, and even from the nobility.  Munich is full of buildings, streets and squares that remind us of those times.

I really enjoyed this tour, and want to convey all my appreciation to the organizers!

Oh, and while we were looking at the plaque commemorating the place where the former Gestapo headquarters used to stand, a cab and a van got into a wreck.


The Party

It wouldn't be a Plone conference (whether with a C or a K) without a great party.  We had the run of the entire Villa Flora restaurant, with a very civilized buffet-style dinner made up of too many great selections to count, and unlimited beverages.  A DJ, too.  Great fun!


Some other random notes

First of all, one of the participants was a PhD student who is currently doing research on the factors that cause retention or attrition in FLOSS projects.  He had no prior connection to the Plone community, and so he didn't know much about it.  I thought it was really interesting to have an outsider asking us lots of questions, forcing us to think about who we are, why we do what we do, what caused us to maybe change or stay the same, etc.  It's one thing to hear a keynote tell us what a great community we are (which is true), but it's another to be put in a position to articulate it ourselves, trying to be as objective as possible.

Possibly the only snafu of the whole conference was that the wifi in the main auditorium stopped working about halfway through the second day and never came back up.  I know the organizers were very chagrined about it, but they were powerless to fix it.

Plone Konferenz: Day 1

A brief report, my favorite links and keywords after the first day at the Plone Konferenz in München

I have never posted reports from conferences before, so a few words on my intentions:  I don't plan to be exhaustive, nor even coherent.  I just want to share some notes I took at the various talks I attended today.  The very minimum that happens for me when I see an interesting talk is that I will get inspired to find out more about a number of concepts the presenter touched upon.  That's usually the extent of my notes - leads for later exploration.  My notes generally are all in Evernote, perfectly searchable to begin with.  By posting them online I will get the added benefit of having them indexed by google...

The conference schedule

The slides for most of today's talks are already available online.  Just go to the schedule page, click on the talk you are interested in, and look under the presenter's profile box.  If there is a line called Folien ("slides"), the link next to it points to the slides.  They will all be in German, of course, with a few exceptions.

The MC in the main lecture hall was Philip Bauer.  To introduce the conference, some department chair or other said a few words, of which I only remember these:  "I always tell my students that the best programs are written not at the keyboard, but while taking a swim or walking in the park."  Philip also thanked him for donating the venue to us for free, which is remarkable.

Big kudos to the conference organizers:  from the professionalism and flawlessness of the first day's proceedings one would surmise that these folks are old hands at whipping up conferences of this caliber, and not that this was the first ever German Plone Konferenz.  Hats off to you!

The keynote was by Elizabeth Leddy:

Old dogs and new tricks (slides)

Liz's talk was truly excellent.  The gist of it:  she is fed up that coding for Plone is so ridiculously hard, so often.  Example:  it takes 6 files and 20 lines of code just to add a new stylesheet.  Another example:

from Products.CMFCore.utils import getToolByName

# Given some content object "context", look up the portal_url tool,
# then ask it for the site root:
portal_url = getToolByName(context, "portal_url")
portal = portal_url.getPortalObject()

You might recognize this as the code you need to grab the site portal.  How silly is this?  Nobody can ever remember it, it's always copied and pasted from somewhere else.

It's not just that Plone developers want their lives to be easier.  It really is about the success of the platform.

"If you want a platform to be successful, you need massive adoption, and that means you need developers to develop for it. The best way to kill a platform is to make it hard for developers to build on it. Most of the time, this happens because platform companies ... don't know that they have a platform (they think it's an application)." ~ Joel Spolsky

Easy things should be easy. They should be so easy that we don't even have to look at documentation, or find code samples to copy and paste from.

The most-tweeted quote from the presentation was "Plone developers cost much more than the competition because they are highly skilled + scarce" (in which the first tweeter mistakenly wrote "scared" instead of "scarce").

I highly recommend the slides (linked above).

Liz ended by announcing that this will be her crusade for the year, and that she would send an email to plone-developers to solicit input on all the things that frustrate us and that should be improved.  She promptly did, and set up a google moderator space to collect input from the community.  As of this writing, there are 19 posts already.

What a great tone to set for this conference!


Harald Frießnegger
Buildout - Alles im Griff! (slides) (alternate)

Tips and tricks for doing useful sysconfigy things with buildout.  For this one I just have a bunch of keywords and names I want to look up later:

bin/checkversions -v versions.cfg | grep was

Plone Software Center

lovely.buildouthttp for authentication


parts/varnish-build/bin/varnishlog -c -o



Memmon


Wolfgang Thomas
Mehrsprachige Sites erstellen und verwalten - Tipps aus der Praxis (slides)

This was about multilingual sites.  Working in the US, I rarely have the chance to use the many multilingual Plone features and add-ons.  In fact, I vividly remember a sprint in Seattle in which we tried to do "the right thing" by declaring all the i18n domains and such, but it would have been immediately obvious to anyone looking at our code that we didn't know what we were doing.  We hoped that at least our effort would be appreciated...

valentine.linguaflow  + XLIFFMarshall



Babel (PyPI)

I liked one feature of slc.linguatools:  your content gets a ML workflow, so that when one item is edited, all the translations of that item get an info box at the top of the page to alert you that those translations are out of date.

Stefania Trabucchi
Mobile Kontexte und Responsive Webdesign (slides)

I think the title is self-explanatory.

"Content first/Navigate secong" von Luke Wroblewski



Roman Jansen-Winkeln
Datendrehscheibe und Download-Plattform: E-Book-Management mit Plone

Tr.: Data hub and download platform: e-book management with Plone

This was really cool.  The only bummer is that the product described is not released, understandably, since it depends on a whole setup external to Plone.

It's about using Plone as a delivery and reporting platform for e-books.  The product provides two new content types:  an e-book, and an e-book container.  You set all the metadata and upload the e-book.  Then you decide how many random download codes you want to assign to a given e-book.  Now you can print fliers with the URL and code, or stickers, or whatnot.  It even gives you little snippets of HTML that can be embedded on any other website and generate a download form, which asks for an email address, a name, and whatever other information is required.  The form invokes a view, which in turn wraps the e-book in whatever DRM "envelope" is associated with the code, and starts the download.  Then you can get statistics on downloads per book, over different time ranges.

It looks conceptually really simple, and very powerful.


Johannes Raggam
Save The Dates: Das Kalenderframework

All about the calendar framework.  Johannes is an expert on event types in Plone.

I want to try this out.


Alan Runyan
Plone Deployment Architecture (slides)

Alan joined us live on Skype from Houston, TX.  The biggest takeaway for me is that I really want to move my hosting to relstorage.  Also interesting:  demostorage.  This takes an existing filestorage, network storage (ZEO) or relstorage as it was at startup time, and does all writes in RAM, i.e. the persistent storage is never touched.



HAProxy can be configured to report on the URLs that take the longest.
collective.stats:  IN: event.log, OUT: CSV file (also HTTPResponse headers)

Default Plone Mode
- content in root of Plone
- Maybe content "staged" to a container


collective.stats  is "native" but big PRO/CON
Carrot/Celery as good as Python gets


Lightning Talks

Jan Ulrich Hasecke talked about the German user manual.  It's written with Sphinx, so it can produce a nice online version as well as a great PDF for hard copies.  All the images can be replaced, to customize it for a specific deployment.

Daniel Nouri talked some more about Kotti, a lightweight CMS built on Pyramid.


Kotti's guiding principles:

Runtime rules, static is subordinate,
Don't mess with a framework,
Keep it simple and pythonic,
No fights with storage,
Use chains and trees as structures.

Jens Klein on YAFOWIL
(Yet another form widget library)

Jens decided he hates form libraries; z3c.form is insane.  So he wrote his own, with extreme simplicity as the goal.  He wants a form to be generated with no python code, or as little as possible.  A form is a data structure (trees and chains), so it can be represented either as a dict or in YAML.
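
As an illustration of the form-as-data idea, here is a hypothetical YAML declaration of a small form.  The key names and factory chains below are made up for this sketch and are not taken from the actual YAFOWIL syntax:

```yaml
# Illustrative only: keys are hypothetical, not real YAFOWIL syntax.
form:
  name: feedback
  action: /submit
  widgets:
    - name: email
      factory: field:label:error:text
      props:
        label: E-Mail address
        required: true
    - name: message
      factory: field:label:error:textarea
      props:
        label: Your message
    - name: send
      factory: submit
      props:
        label: Send
```

Because the form is just a tree (a chain of widgets hanging off a root node), the same structure can live in a Python dict, in a YAML file, or be built programmatically.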

example: Plone Custom Search

Sounds interesting.

Getting Ready for Plone Konferenz 2012

In which I explain why this is the first post on this site.

I arrived in Munich from Seattle on Tuesday afternoon, jetlagged and tired, but otherwise unscathed.  The reason this is the first post on this site is that the Plone Konferenz is the first occasion where I will be giving out my new business cards, and until today there was no website at the domain printed on them.  So I got busy, set up a brand new Plone 4.2b2 site, and started pulling content together.  Better rough than nothing!