I was fortunate enough to receive a CR-48 from Google last week.  My impressions of it are more or less in line with what MG Siegler over at TechCrunch has reported: good battery, nice screen, lightweight laptop.  The only problem is that I have needed to adjust my workflow on my regular computer, a MacBook Pro, to make sharing documents between the two machines easier.  For example, I have migrated away from OmniOutliner to a comparable online tool, Workflowy.

The other major change is that my keyboard habits are shifting.  After only a weekend of ChromeOS usage, I already prefer having the caps lock key replaced with a button that opens a new browser tab.

When I got into work this morning I immediately missed my new favorite key, so I decided to figure out a way to replicate the missing behavior.

A recent article outlines how to turn the caps lock key into a control, option, or splat.  One benefit of an Apple-approved method is that the keyboard's caps lock light disables itself when you change the functionality.  A never-ending flickering green light would have been a constant annoyance for me.

That approach wasn't enough; I wanted more: the ability to open a new tab in Chrome.  I found the PCKeyboardHack preference pane/kext.  Using the instructions provided, I remapped the caps lock key to F14, or keycode 107 on the slim Apple keyboard.

Now that caps lock was mapped to a real keycode, it wasn't difficult to change the new tab shortcut in Chrome to respond to F14.

These changes work on the laptop's keyboard in addition to the external keyboard, and I'm very pleased with the results.

Nate McMinn has a brief wrap-up of the Alfresco developers conference.  I was unable to attend, and it sounds like I missed a lot of interesting news.  I was excited to hear:

One upcoming project that was discussed at DevCon is putting together a third-party components catalog for Alfresco.  Right now there is nothing like this available.  Alfresco community projects are scattered all over the place.  Some are in Alfresco Forge, some are on Google Code, still others are on developers' blogs (mine included).  I'm sure I'm forgetting a few locations, but you get the idea.  Rolling all of this up in one queryable repository would be a fantastic addition to the Alfresco community.

I wonder how this will be deployed and who will maintain it.  The wiki idea for documentation seems to be barely moving along.

JMX rocks.  When configuring a server it is a boon to developers, especially when combined with the Alfresco subsystem architecture: you can iterate on changes to the LDAP sync without having to restart the server.  JMX also gives savvy system administrators a way to manage and monitor what's going on within the repository.
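
The moving parts are easy to see outside Alfresco.  This self-contained sketch (the Counter bean and its names are made up for illustration) registers a standard MBean on the platform MBeanServer and reads an attribute back through it, which is the same path a console like JConsole takes:

```java
import java.lang.management.ManagementFactory;
import javax.management.MBeanServer;
import javax.management.ObjectName;

// Standard MBean convention: the interface must be named <Impl>MBean
interface CounterMBean {
    int getCount();
}

class Counter implements CounterMBean {
    public int getCount() { return 42; }
}

public class JmxDemo {
    // Register the MBean and read the attribute back through the server,
    // exactly what a JMX console does when it displays the value.
    public static int readCountViaJmx() throws Exception {
        MBeanServer server = ManagementFactory.getPlatformMBeanServer();
        ObjectName name = new ObjectName("Demo:name=Counter");
        if (!server.isRegistered(name)) {
            server.registerMBean(new Counter(), name);
        }
        return (Integer) server.getAttribute(name, "Count");
    }

    public static void main(String[] args) throws Exception {
        System.out.println("Count via JMX: " + readCountViaJmx());
    }
}
```

The same register-then-query cycle is what Alfresco's exported beans go through, just with Spring doing the registration for you.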

If you’re still unfamiliar with the basics of JMX, especially within the context of Alfresco, Jared Ottley over at Alfresco has written a number of excellent tutorials.  I’ve added some additional articles and come up with the list below.

Some links:

With the basics out of the way, it is often interesting to create your own MBean that can report custom statistics or expose custom methods.  This tutorial creates a new MBean that shows the number of asynchronous jobs being run.  Alfresco exports beans using standard Spring practices, which keeps everything well documented.  The list of things to create is small:

  • Context file to register new MBean
  • Annotated Java class

Context File

<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE beans PUBLIC "-//SPRING//DTD BEAN//EN" "http://www.springframework.org/dtd/spring-beans.dtd">
<beans>
    <bean id="whySlow" class="com.zia.jmx.WhySlow"/>

    <bean id="ziaExporter" class="org.springframework.jmx.export.MBeanExporter">
        <property name="assembler" ref="assembler"/>
        <property name="beans">
            <map>
                <entry key="Zia:name=WhySlow" value-ref="whySlow"/>
            </map>
        </property>
    </bean>

    <bean id="jmxAttributeSource"
          class="org.springframework.jmx.export.annotation.AnnotationJmxAttributeSource"/>

    <!-- will create management interface using annotation metadata -->
    <bean id="assembler"
          class="org.springframework.jmx.export.assembler.MetadataMBeanInfoAssembler">
        <property name="attributeSource" ref="jmxAttributeSource"/>
    </bean>
</beans>

Java Class

import java.lang.reflect.Field;
import java.util.concurrent.ThreadPoolExecutor;

import org.springframework.jmx.export.annotation.ManagedAttribute;

public class WhySlow {
    @ManagedAttribute( description = "Asynchronous actions left to run" )
    public long getAsyncActions() {
        AsynchronousActionExecutionQueueImpl aaeq = ( AsynchronousActionExecutionQueueImpl )
                AlfUtil.getSpringBean( "defaultAsynchronousActionExecutionQueue" );
        long ret = -1;
        try {
            Class<?> c = aaeq.getClass();
            // The thread pool is a private field; reflection gets us in anyway
            Field tpeField = c.getDeclaredField( "threadPoolExecutor" );
            tpeField.setAccessible( true );
            ThreadPoolExecutor tpe = ( ThreadPoolExecutor ) tpeField.get( aaeq );
            // tasks scheduled minus tasks completed = tasks still pending
            ret = tpe.getTaskCount() - tpe.getCompletedTaskCount();
        } catch ( NoSuchFieldException nsfe ) {
            // field not found in this Alfresco version; fall through and report -1
        } catch ( IllegalArgumentException e ) {
        } catch ( IllegalAccessException e ) {
        }
        return ret;
    }
}

The annotations are important: they provide the documentation shown in the console.  There are some reflection shenanigans that allow access to private fields.  Your implementation will not need much of this code, except for the annotations.
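
The reflection trick is generic.  Here is a self-contained sketch of the same idea; ActionQueue is a made-up stand-in for Alfresco's queue class, and the arithmetic matches the tasks-scheduled-minus-tasks-completed computation above:

```java
import java.lang.reflect.Field;
import java.util.concurrent.Executors;
import java.util.concurrent.ThreadPoolExecutor;

// Hypothetical stand-in for AsynchronousActionExecutionQueueImpl:
// the executor is private, just like the real thing.
class ActionQueue {
    private ThreadPoolExecutor threadPoolExecutor =
            (ThreadPoolExecutor) Executors.newFixedThreadPool(2);
}

public class ReflectionDemo {
    // Pry the private field open and compute pending task count.
    public static long pendingTasks(Object holder) throws Exception {
        Field f = holder.getClass().getDeclaredField("threadPoolExecutor");
        f.setAccessible(true); // bypass "private"
        ThreadPoolExecutor tpe = (ThreadPoolExecutor) f.get(holder);
        return tpe.getTaskCount() - tpe.getCompletedTaskCount();
    }

    public static void main(String[] args) throws Exception {
        // Nothing has been submitted, so nothing is pending
        System.out.println(pendingTasks(new ActionQueue()));
    }
}
```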

When this code is deployed the WhySlow MBean will appear at the top level, next to the Alfresco node.  This is controlled by the key of the map passed into the “beans” property (Zia:name=WhySlow) and is explained in the Spring docs.

Wrap up

An MBean exposed through JMX tends to keep the separation of concerns better than many of the alternatives.  We have created “consoles” in web scripts, but it seems to be difficult to train the sysadmins to go to multiple places for configuration.  Once the repository is started, it is consistent to point users at JMX for all administration and reporting.



Zia Consulting develops in Eclipse using a setup different from the one described by Alfresco.  We run an embedded version of Jetty with the default Alfresco WARs.  At runtime we mix in changes using extra classpath entries and web overlays.  This has many benefits, most importantly that we can develop functionality without having to restart the repository.  When there is a reason to restart the server, we make an effort to fix the problem through programming or configuration.

Share has caches that make web scripts and Surf run faster.  There is documentation on how to turn these caches off, but it changes with every release of Share or Surf, which is difficult to keep up with.

If you're interested in the documented process I have a list of articles to read:

Old Surf intro - refreshing the Surf page cache.  I don't believe this is needed in the most recent versions of share.

Official wiki documentation - For Alfresco 3.0 ➔ 3.3, before Surf joined the Spring project.

Official developer guide - the section on debugging never got finished, likely because Surf moved over to Spring.

I’m sure there are more.  Feel free to add them to the comments.

We have given up on following the documentation and developed a custom Web Script that invalidates these caches whenever a source file is changed.  The code for this Web Script comes from the service index refresh that is part of “org.springframework.extensions.webscripts.bean.IndexUpdate”.  We paid particular attention to the reset method of “org.springframework.extensions.webscripts.AbstractRuntimeContainer”.  The reset method is what runs when you go to http://localhost/share/service/index and click the reset button.  The important code is trivial:

public void reset() {
    // ... the four registry resets; see AbstractRuntimeContainer for the full method ...
}

Running all four of these resets takes a couple of seconds, which is impractical after every save.  Taking just the script and template resets and putting them in a Web Script keeps it pretty lean: running time drops from 2-3 seconds to a few hundred milliseconds.  It is fast enough that the refresh happens before you can cmd+tab to your web browser and reload the page.

We created a refresh Web Script using the above code and then set up an Eclipse builder to run when the source is updated.  To do this you need to create a few things.

  • Refresh Web Script.  It’s best to use “none” authentication.
  • Ant script that will call the new Web Script.  Shown below.
  • In Eclipse turn the directory containing javascript and freemarker templates (the config directory if your projects look like exploded amp files) into a source directory so it triggers the auto build.
  • Create a builder according to this tutorial with one change:
    • In step #11 they recommend NOT creating an auto build.  Ignore this direction - auto building is really handy for our purposes.  It doesn’t cause performance issues, so turn it on and configure it to run “run-refresh-webscript”.

The ant file that we use:

<?xml version="1.0" encoding="UTF-8"?>
<project name="project" default="run-refresh-webscript">
    <target name="run-refresh-webscript">
        <get src="http://localhost:8080/share/service/zia/util/refresh"
             dest="refresh-results.dat" />
    </target>
</project>
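
If ant isn't handy, the same GET is only a few lines of Java.  Everything below is a stand-in: the throwaway local HttpServer plays the role of Share, and the "caches reset" response is whatever your own refresh web script actually returns:

```java
import com.sun.net.httpserver.HttpServer;
import java.io.BufferedReader;
import java.io.IOException;
import java.io.InputStreamReader;
import java.net.HttpURLConnection;
import java.net.InetSocketAddress;
import java.net.URL;

public class RefreshCall {
    // GET a URL and return the body, the same thing ant's <get> task does.
    static String get(String url) throws IOException {
        HttpURLConnection conn = (HttpURLConnection) new URL(url).openConnection();
        BufferedReader in = new BufferedReader(
                new InputStreamReader(conn.getInputStream()));
        StringBuilder body = new StringBuilder();
        String line;
        while ((line = in.readLine()) != null) {
            body.append(line);
        }
        in.close();
        return body.toString();
    }

    // Spin up a local server standing in for the Share refresh web
    // script, call it, and return what it said.
    public static String roundTrip() throws IOException {
        HttpServer server = HttpServer.create(new InetSocketAddress(0), 0);
        server.createContext("/share/service/zia/util/refresh", exchange -> {
            byte[] ok = "caches reset".getBytes();
            exchange.sendResponseHeaders(200, ok.length);
            exchange.getResponseBody().write(ok);
            exchange.close();
        });
        server.start();
        try {
            return get("http://localhost:" + server.getAddress().getPort()
                    + "/share/service/zia/util/refresh");
        } finally {
            server.stop(0);
        }
    }

    public static void main(String[] args) throws IOException {
        System.out.println(roundTrip());
    }
}
```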


I found this article by Kevin Roast that has some great tips on Share development.  The “mode” selection is of great interest.

Datalists out of the box don't have a whole lot of functionality beyond simply capturing data.  A simple way to help users is to add calculated columns that transform other columns or related data from the repository.  In Alfresco 3.3 this functionality is straightforward and allows creating powerful datalists that even an Excel user could love.

For this demo we will create part of a project management datalist that allows users to enter the following: estimate to completion, estimate at completion, actuals, and budget.  From these inputs we will generate the variance, percent complete, and percent expended.  The pieces to build:


  • Create a node and type form filter
  • Create a couple of new static fields
  • Create a new context file to instantiate the filters
  • Tell Share to display the fields in the forms
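
The calculated columns themselves boil down to simple arithmetic.  The formulas below are hypothetical definitions (an assumption on my part, adjust to match your own project management conventions): estimate at completion is actuals plus estimate to completion, and variance is budget minus that total:

```java
public class ProjectMetrics {
    // Hypothetical formulas; substitute your own definitions.
    // EAC (estimate at completion) = actuals + ETC (estimate to completion).
    public static double variance(double budget, double actuals, double etc) {
        return budget - (actuals + etc);
    }

    public static double percentComplete(double actuals, double etc) {
        return actuals / (actuals + etc) * 100.0;
    }

    public static double percentExpended(double actuals, double budget) {
        return actuals / budget * 100.0;
    }

    public static void main(String[] args) {
        // budget 100k, 40k spent, 60k of work estimated to remain
        System.out.println(variance(100000, 40000, 60000));   // 0.0
        System.out.println(percentComplete(40000, 60000));    // 40.0
        System.out.println(percentExpended(40000, 100000));   // 40.0
    }
}
```

The form filter's job is just to run arithmetic like this when a row is saved and write the results into the static fields.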

A coworker and I have been scheming for a number of months on ways to run migration scripts and develop workflows in Alfresco.  The common thread is that we have a running repository and we want to run arbitrary code against it.  Our ideas have taken us in two different directions.  I'd like to talk about a few of them, outline the issues we've run into, and then describe our merged solution.


One approach to migration (the merits depend on the project) is to load a bunch of data into the repository and then run code against the repository to translate the data, extract metadata, or whatever.  This can be done with an external suite of tools (e.g. an ETL), but that is often overkill.  The code to accomplish the same thing inside Alfresco is often simpler than 3rd-party tools, involves fewer tools and languages, and can be cheaper.  There are a number of ways to run JavaScript or Java code.

Command Servlet

There is an old (judging from the age of the documentation) servlet that can run arbitrary JavaScript.  It seems to fit the bill, except that it will only run code stored as nodes in the repository and only in "insecure" mode, which means no access to native Java, and you have to develop over CIFS/WebDAV or directly in a WYSIWYG editor in Alfresco.  None of this is ideal.



As a webscript

Creating a web script gives you access to the real Rhino engine, i.e. "secure mode", or you can write the script in Java, giving you access to the entire API.  This is kludgy because you're executing your migration script by refreshing a webpage.  Also, the "standard out" is either logging messages or the FreeMarker template.  The last nail in the coffin is that you must run your script in one massive transaction.  This really slows down the system, and if you are processing more than 1000 nodes you will get cache error messages.  Turning off the caching in the web script's descriptor file means no more access to Company Home.

Workflow development

The slowest part of workflow development is stepping through a long workflow.  To speed up the process my coworker has developed a series of JUnit tests that set up the workflow and run it to the particular step he's working on.  This allows all of the previously developed steps to be skipped so that he can develop and debug the current step; it also provides a certain level of integration/regression testing.  He also spends a lot of time developing the Java parts of a workflow from within a breakpoint, letting the JVM swap the class files and then stepping through the code.

There are a couple of issues with this approach.  The first is startup time and memory usage: the JUnit test spins up an entire Alfresco repository, which is slow and very memory intensive.  Since it shares the same alf_data as the real repository, it tends to corrupt the Lucene indexes and sometimes the entire repository.  He ends up dropping and reloading the entire schema multiple times per day and rebuilding the indexes.  Even with these drawbacks, the development time improvements from faster iterations are substantial.

Java vm attachment - a new approach

Along with Java 6 came a new feature that I only recently discovered: you can attach to a running virtual machine and run code loaded from a jar file that is pushed into it.  Conceptually, I can write a program (using a Java main), bundle it up in a jar, load it onto a server with a running Alfresco repository, and then execute the code in the same virtual machine, giving it access to the applicationContext and any managed beans.  It works like JMX, for developers.

If attaching to the VM is all you're interested in, feel free to stop here and read through this tutorial of the process.  If you're interested in a more Alfresco-specific, or at least a more web-container-specific implementation, then stay with me.

I took what I learned in that tutorial and created a program and an agent-main function.  Both are bundled into a jar file.  The jar has a manifest that makes it executable; the program takes a classpath directory to be added to the running container and a Runnable class to be run in a new thread from within the container.  The relevant code in the program class is:

String classPathToRunnable = args[0];
String runnableClassname = args[1];
String agentArgs = classPathToRunnable + "," + runnableClassname;
List<VirtualMachineDescriptor> listAfter = null;
try {
    listAfter = VirtualMachine.list();
    boolean connected = false;
    for (VirtualMachineDescriptor vmd : listAfter) {
        // The embedded Jetty launcher shows up under a class named "Main"
        if (vmd.displayName().contains("Main")) {
            VirtualMachine vm = VirtualMachine.attach(vmd);
            vm.loadAgent(agentJarPath, agentArgs); // agentJarPath points at agent.jar
            vm.detach();
            connected = true;
        }
    }
    if (!connected) {
        System.err.println("Couldn't connect, never found the server.");
    }
} catch (Exception e) {
    e.printStackTrace();
}

classPathToRunnable is a directory, e.g. "bin", when run in a typical Java environment.  The runnable classname is fully qualified, such as "com.ziaconsulting.MyRunnable".  The classpath directory will be added to the classpath of the container, and then we use reflection to instantiate and call the class that is passed in.  The attach process works on a jar file that lists an agent class in its manifest.  I have a simple ant file that manages the build and the manifest:

<project name="Attach" default="jar">
    <target name="jar">
        <jar destfile="agent.jar">
            <fileset dir="./bin">
                <include name="**"/>
            </fileset>
            <manifest>
                <attribute name="Main-Class"
                           value="com.ziaconsulting.Attach" />
                <attribute name="Agent-Class"
                           value="com.ziaconsulting.Agent" />
            </manifest>
        </jar>
    </target>
</project>

Pay special attention to the "Main-Class" and "Agent-Class" manifest attributes.

The next step is to work with the agent-main class.  This proved to be a very tricky part because of the complexity inherent in Java containers, specifically the number of classloaders.  I developed code specific to Jetty's container, which will probably not work with Tomcat, though the ideas should be the same:

public static void agentmain(String agentArgs, Instrumentation inst) {
    // Execute the class that was passed in
    try {
        // Split the arguments - should be a classpath and a class
        String[] args = agentArgs.split(",");
        if (args.length != 2) {
            throw new IllegalArgumentException("Not enough arguments passed");
        }

        // Begin the jetty specific stuff; we are trying to get the
        // classloader from the Alfresco web app
        Server s = Main.server;
        WebAppDeployer wap = s.getBeans(WebAppDeployer.class).get(0);
        HandlerCollection h = wap.getContexts();
        Handler[] handlers = h.getHandlers();
        for (Handler handler : handlers) {
            if (handler.toString().contains("alfresco")
                    && handler instanceof WebAppContext) {
                WebAppContext wac = (WebAppContext) handler;
                WebAppClassLoader cl = (WebAppClassLoader) wac.getClassLoader();

                // Check and see if this classpath is already added; if
                // not then add it
                boolean classPathAlreadyExists = false;
                for (URL entry : cl.getURLs()) {
                    if (entry.getPath().equals(args[0])) {
                        classPathAlreadyExists = true;
                    }
                }
                if (!classPathAlreadyExists) {
                    cl.addClassPath(args[0]);
                }

                // Run the passed in class with the webapp's classloader
                final Class<?> runnableClass = cl.loadClass(args[1]);
                final Thread thread = new Thread(
                        (Runnable) runnableClass.newInstance());
                thread.setContextClassLoader(cl);
                thread.start();
            }
        }
    } catch (Exception e) {
        e.printStackTrace();
    }
}

This may seem like beating around the bush a little, but my goal was for all of the agent/attaching code to be abstracted away so that all users have to do is create their own runnable class, one that knows nothing about how to get into the VM.  This is demonstrated by the simplicity of my runnable class:

public class MyRunnable implements Runnable {
    public void run() {
        try {
            ApplicationContext ac = AlfUtil.applicationContext;
            // ... do anything you like with the managed beans here ...
        } catch (Exception e) {
            e.printStackTrace();
        }
    }
}

There isn't much more to show.  I haven't had time to expand this to anything more than a proof of concept, but even at this stage it's pretty compelling.

The output of this code is directed to the stdout of the container, e.g. catalina.out or alfresco.log.
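
The load-by-name-and-run core of the agent works the same outside any container.  A minimal self-contained sketch (Hello is a made-up stand-in for MyRunnable, resolved through the default classloader instead of Jetty's):

```java
public class LoadAndRun {
    // A trivial Runnable to resolve by name; stands in for MyRunnable.
    public static class Hello implements Runnable {
        public static volatile boolean ran = false;
        public void run() {
            ran = true;
        }
    }

    // Resolve a class by name, instantiate it, and run it in a new
    // thread: the same core trick the agent performs.
    public static void runByName(String className) throws Exception {
        Class<?> c = Class.forName(className);
        Runnable r = (Runnable) c.newInstance();
        Thread t = new Thread(r);
        t.start();
        t.join(); // the agent doesn't join; we do, so the effect is visible
    }

    public static void main(String[] args) throws Exception {
        runByName(Hello.class.getName());
        System.out.println(Hello.ran); // true
    }
}
```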



Our development environment runs off an exploded WAR file in an embedded version of Jetty, all running in Eclipse.  We typically build the WAR from source pulled from Alfresco Enterprise SVN.  Knowing how to do this means we can check out any of the development branches and get a feeling for how different features are progressing without waiting for the official build to be distributed.  It also gives me a warm fuzzy to know how the beast is built.

Up until 3.4b it was easy: run "ant incremental" within the enterpriseprojects directory and then unzip the WARs that were produced.  The build changed in 3.4 so that this process now hits a circular dependency and fails immediately.

I posted this as a regression bug to JIRA and got a response today from Paul Holmes-Higgin suggesting ant -Dbuild.script=enterpriseprojects/build.xml -f continuous.xml distribute.  This still fails unless you set up some extra folder structure to support building all of the distribution packages (see this discussion), but at least you get the WAR file in the assemble directory, which is good enough for me.

One thing he didn't mention is that you need to pass version numbers as arguments to ant so that some of the jars that are created have valid filenames.

I settled on this:

$ ant -Dbuild.script=enterpriseprojects/build.xml \
      -Dversion.major=0 -Dversion.minor=0 -Dversion.revision=0 \
      -f continuous.xml distribute

Again, this build will fail unless you setup the folder structure described in the above discussion.

Check for war files:


$ ls -l build/assemble/web-server/webapps/
total 239080
-rw-r--r--  1 dhopkins  staff    97M Oct 13 07:52 alfresco.war
-rw-r--r--  1 dhopkins  staff    20M Oct 13 07:52 share.war


Which leads eventually to...

[Screenshot: Screen shot 2010-10-11 at 10.34.21 AM.png]

How sweet it is!

There are many times when the Alfresco repository needs to be modified as an AMP is first installed, e.g. to migrate current nodes.  While Alfresco provides a data bootstrap mechanism, it appears inadequate for anything except the most basic tasks.  This post was predicated on finding a use of the patch service in the WCM installation script.

I haven’t seen many docs on the proper or improper use of the patch service.  At first blush it is a Repository-internal tool used to update the repository during major releases and hotfixes.  As I’ve mentioned, the inspiration for this came from the WCM "installation", during which you insert wcm-bootstrap-context.xml into the repository's classpath and then start it up.  In that file is a new bean that uses the patch service to create a couple of folders.

[Screenshot: Screen shot 2010-09-02 at 10.09.53 AM.png]
Installing customizations using AMPs into a new Alfresco repository is pretty easy; there isn’t much room for configuration overwriting other configuration.  In older systems that have been installed by a client, or in systems with multiple SIs' customizations installed, it may be the case, especially in Share, that configuration from one AMP overwrites the configuration of another.  You can get into difficult-to-detect problems depending on the order in which AMPs are applied to Alfresco (probably alphanumeric, last one wins).

Copyright © 2015 boulder dan