Friday, January 14, 2011

Use Apache as a secure (reverse) proxy for JBoss 5 AS/EAP

This task can be divided into two independent components (configure Apache to use SSL, set up Apache as a reverse proxy for JBoss) and a single step to make those two work together. The guidelines below have been successfully tested on an Apache 2.2.17/JBoss EAP 5.1.0.GA combination, the latter using Tomcat native libs, on a single server.


Part 1: Use SSL for access to Apache

1) Download and install the Apache Httpd server (version 2.2.6 or higher; 2.2.17 is current at the time of writing). The folder in which the server is installed is referred to as APACHE_HOME further on.

2) In APACHE_HOME/conf/httpd.conf, un-comment the following lines:
    LoadModule ssl_module modules/mod_ssl.so

    Include conf/extra/httpd-ssl.conf
Then comment the following one (to restrict access without SSL):
    #Listen 80
3) Put your certificate and key in the APACHE_HOME/conf folder, and (if necessary) change the names in APACHE_HOME/conf/extra/httpd-ssl.conf entries to match:
    SSLCertificateFile "[APACHE_HOME]/conf/server.crt"
    SSLCertificateKeyFile "[APACHE_HOME]/conf/server.key"
If you don’t have a CA-issued certificate, you can create a self-signed certificate for testing purposes; see e.g. the OpenSSL FAQ for how to do so. An OpenSSL executable is provided in the APACHE_HOME/bin folder.

Remark: On Windows platforms the SSLPassPhraseDialog parameter (in httpd-ssl.conf) cannot be used with its default value ‘builtin’. The simplest (albeit not the safest) solution is to remove the passphrase from the key, so Apache no longer needs to ask for it at startup.
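If you go down that road, the following OpenSSL commands create a self-signed certificate and then strip the passphrase from the key (the CN and the passphrase are just examples; use your own values):

```shell
# Create a self-signed certificate for testing; file names match the
# httpd-ssl.conf defaults mentioned above.
openssl req -x509 -newkey rsa:2048 -keyout server.key -out server.crt \
    -days 365 -passout pass:changeit -subj "/CN=localhost"
# Strip the passphrase so Apache can start unattended (testing only!).
openssl rsa -in server.key -passin pass:changeit -out server-nopass.key
mv server-nopass.key server.key
```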

4) (Re-)start the Apache server and test whether it works as expected… and don’t forget the ‘https://’!

For an extensive explanation of the SSL configuration possibilities, see the documentation for the Apache module mod_ssl.


Part 2: Set Apache up as a reverse proxy for JBoss

1) Download the mod_jk connector (version 1.2.15 or higher; 1.2.31 is current at the time of writing), rename the ‘mod_jk-1.2.[*]-httpd-2.2.x.so’ file to ‘mod_jk.so’ and move it to the APACHE_HOME/modules folder.

2) Add the following line to APACHE_HOME/conf/httpd.conf:
    Include conf/mod-jk.conf
3) Create a new file in APACHE_HOME/conf with the name ‘mod-jk.conf’, and fill it with:
    LoadModule jk_module modules/mod_jk.so

    JkWorkersFile conf/workers.properties

    JkLogFile logs/mod_jk.log
    JkLogLevel info
    JkLogStampFormat "[%a %b %d %H:%M:%S %Y]"
    JkRequestLogFormat "%w %V %T"

    JkOptions +ForwardKeySize +ForwardURICompatUnparsed -ForwardDirectories

    # Mount your applications
    ###JkMount /application/* loadbalancer
    # Mount all URLs:
    JkMount /* node1

    # You can additionally use an external file for mount points.
    ###JkMountFile conf/uriworkermap.properties
    # Mount file reload check interval in secs (0 = turned off).
    ###JkMountFileReload 60

    # Add shared memory. Used only on unix platforms. The shm file is used by balancer and status workers.
    ###JkShmFile run/jk.shm

    # Add jkstatus for managing runtime data:
    <Location /jkstatus/>
        JkMount status
        Order deny,allow
        Deny from all
        Allow from 127.0.0.1
    </Location>
If not all requests are to be forwarded to node1, the line starting with ‘JkMount’ must be adjusted accordingly. Alternatively, it is possible to use a separate properties file (via ‘JkMountFile’, with entries following the pattern ‘URL=worker’, e.g. ‘/jmx-console=node1’) if you need a more extensive redirection scheme.
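Such a mount file could look like this (the application paths are made up for the example):

```
# conf/uriworkermap.properties
/jmx-console=node1
/jmx-console/*=node1
/shop/*=node1
/reports/*=node1
```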

In the configuration above the access to the status manager (worker with ID ‘status’) is restricted to clients running on the same host, just for illustrative purposes.

See the Tomcat connector reference for further details and possibilities.

4) Create a new file in APACHE_HOME/conf named ‘workers.properties’, and put the following in it:
    # Define list of workers that will be used for mapping requests

    # Define Node1
    # modify the host as your host IP or DNS name.
    worker.node1.type=ajp13
    worker.node1.host=localhost
    worker.node1.port=8009
    worker.node1.ping_mode=A
    # Only needed if the number of allowed connections in the Httpd exceeds maxThreads in JBoss server.xml:
    #worker.node1.connection_pool_size=10
    # Only used for a member worker of a load balancer:
    #worker.node1.lbfactor=1
    # For non-loadbalanced setup with a single node:
    worker.list=node1

    # Define Node2
    # modify the host as your host IP or DNS name.
    #worker.node2.type=ajp13
    #worker.node2.host= node2.mydomain.com
    #worker.node2.port=8009
    #worker.node2.ping_mode=A
    #worker.node2.connection_pool_size=10
    #worker.node2.lbfactor=1

    # Load-balancing behaviour
    #worker.loadbalancer.type=lb
    #worker.loadbalancer.balance_workers=node1,node2
    # Sticky sessions are enabled by default:
    #worker.loadbalancer.sticky_session=Off
    #worker.list=loadbalancer

    # Status worker for managing load balancer
    worker.status.type=status
    worker.list=status
Most lines above are commented out, since we’re aiming for a configuration for a single node without loadbalancing. It is straightforward to add more nodes, with or without loadbalancing; just pay attention to the fact that with loadbalancing the worker.list should not refer to the separate nodes but only to the loadbalancer worker.

5) For each (JBoss) node a ‘jvmRoute’ attribute must be added to the <Engine> element in JBOSS_HOME/server/[configuration]/deploy/jbossweb.sar/server.xml, using the corresponding worker name from the mod_jk configuration as its value:
    <Engine name="jboss.web" defaultHost="localhost" jvmRoute="node1">
And for JBoss AS/EAP version 5 and above that is all that is required!

6) If you didn't configure Apache to use SSL, you can now (re-)start the JBoss and Apache servers and test whether the redirection functions as expected…
If you did configure SSL for Apache, hang on just a bit more...


Part 3: Combining the two solutions above

To be able to access Apache using SSL after which the request is passed to the JBoss instance over AJP, one last adjustment is required:

1) Move the JkMount directives from the APACHE_HOME/conf/mod-jk.conf file to the APACHE_HOME/conf/extra/httpd-ssl.conf file, and make sure they’re within the <VirtualHost> tags:
    <VirtualHost _default_:443>

    […]

    JkMount /* node1
    <Location /jkstatus/>
        JkMount status
        Order deny,allow
        Deny from all
        Allow from 127.0.0.1
    </Location>

    </VirtualHost>
After a restart of the Apache server the pages served by JBoss will be available over HTTPS from Apache (port 443).

Be aware that they are also still available over HTTP from JBoss directly (on port 8080), since the configuration above didn’t remove that (default) situation. To accomplish that, you should comment the HTTP connector entry in the server.xml file of jbossweb.sar.
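In jbossweb.sar/server.xml that boils down to something like the following (attributes abbreviated; leave the AJP connector in place, since mod_jk depends on it):

```xml
<!-- HTTP connector disabled; all traffic now enters through Apache:
<Connector protocol="HTTP/1.1" port="8080" address="${jboss.bind.address}" ... />
-->
<Connector protocol="AJP/1.3" port="8009" address="${jboss.bind.address}" ... />
```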

Wednesday, December 29, 2010

No, labels in Java are not 'evil'... at least not per se!

Last week I got into an argument with some colleagues about the use of labels in Java for escaping nested loops. The general consensus was something along the lines of "using break or continue with a label is evil, because it is a goto". While I feel the construct should be applied with care, and in many instances a refactoring into e.g. a call to a separate method makes more sense, it certainly has its uses and cannot simply be deemed evil.
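For reference, the construct under discussion is a labeled break, which leaves a nested loop in one step without needing a flag variable. A minimal, self-contained example:

```java
public class LabeledBreak {

    // Returns {row, col} of the first cell exceeding the limit, or {-1, -1}.
    public static int[] find(int[][] grid, int limit) {
        int foundRow = -1;
        int foundCol = -1;
        search:
        for (int row = 0; row < grid.length; row++) {
            for (int col = 0; col < grid[row].length; col++) {
                if (grid[row][col] > limit) {
                    foundRow = row;
                    foundCol = col;
                    break search; // leaves both loops at once
                }
            }
        }
        return new int[] { foundRow, foundCol };
    }
}
```

Note that the label can only be attached to an enclosing statement; there is no way to jump to an arbitrary location, which is exactly the restriction that keeps the construct out of goto territory.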

I can think of a couple of sources for the misconception:
  • When Dijkstra published his letter "Go To Statement Considered Harmful" in Communications of the ACM back in 1968, it led to a lot of controversy too, and somehow only the title has stuck with a lot of people - not its actual contents, nor its true intent.
  • Java has the reserved keyword goto, but doesn't allow its use. James Gosling outlawed it, so it must be bad - nevertheless he did put in the labels.
So I feel that not getting the whole picture is responsible for this misconception in some (or maybe even many) programmers. But hey, don't just take my word for it!

In his book 'Thinking in Java' Bruce Eckel explains that "In Dijkstra’s “goto considered harmful” paper, what he specifically objected to was the labels, not the goto. He observed that the number of bugs seems to increase with the number of labels in a program. Labels and gotos make programs difficult to analyze statically, since it introduces cycles in the program execution graph. Note that Java labels don’t suffer from this problem, since they are constrained in their placement and can’t be used to transfer control in an ad hoc manner. It’s also interesting to note that this is a case where a language feature is made more useful by restricting the power of the statement."

In a (lengthy and by now five-year-old) retrospective of Dijkstra's paper, David Tribble illustrates that goto-like constructs are business-as-usual in modern programming languages without it always being apparent, and that the constructs Dijkstra mentioned include not only labels for exiting loops, but also e.g. exception handling (try-catch-finally blocks).
Furthermore he also reaches the conclusion that "Dijkstra's belief that unstructured goto statements are detrimental to good programming is still true. A properly designed language should provide flow control constructs that are powerful enough to deal with almost any programming problem. By the same token, programmers who must use languages that do not provide sufficiently flexible flow control statements should exercise restraint when using unstructured alternatives. This is the Tao of goto: knowing when to use it for good and when not to use it for evil."

Dustin Marx puts it nicely when he says "The more I work in the software development industry, the more convinced I become that there are few absolutes in software development and that extremist positions will almost always be wrong at one point or another. I generally shy away from use of goto or goto-like code, but there are times when it is the best code for the job. Although Java does not have direct goto support, it provides goto-like support that meets most of my relatively infrequent needs for such support."


Now I'm not saying that the above is conclusive evidence that proves my point. But you may interpret it as an incentive to be a little more open-minded when it comes to certain 'conventional wisdoms' surrounding programming...


Update: If you take a look e.g. at this nice article on Java bytecode, specifically the bit about exception handling, you can see what's happening under the hood. That's right, those are just plain vanilla gotos at work when you use a try-catch block!

Wednesday, November 17, 2010

Uploading jBPM .par files (from a repository)

Once you've passed the testing cycles during development, you want to make sure that the processes that get deployed onto the production environment are indeed versions that were released according to your formal build procedure - if you have one in place, of course.

In our case, that means that the officially released processes are available from a Maven repository. Now there's nothing wrong with retrieving a newly released process archive and using e.g. the jBPM console to upload it. That is, if there's just one such .par file to upload.

My current project produces no less than 16 process archives, one of which is referenced in multiple locations - so it is not uncommon to have more than 20 process instances started during the course of a single request we're processing.

Now regardless of the question whether we chose the right granularity for our processes (which I think we did, of course), this turned into quite some work for each deployment cycle, keeping track of which .par file was deployed and whether it was in the correct sequence (we're not using late binding for sub-processes). Performing this task had become too error-prone to allow it for the production environment.

Ant to the rescue?

The user guide states that there are three ways to deploy the process archives (they forget about the jBPM console altogether there):
  • The process designer tool; an Eclipse plug-in that is part of JBoss Tools. This is of course not a real option, since we want to be able to deploy process archives without having to start up an IDE.
  • The org.jbpm.ant.DeployProcessTask; an Ant task available from the regular jBPM jar file. While an Ant build actually is a good option for a command-line alternative, this particular task is simply too much: it starts up a complete jBPM context for uploading the process directly to the database, and as such requires all of the applicable configuration. I prefer to have as little direct database access from external hosts as possible (e.g. for security considerations), and this approach doesn't accomplish that.
  • Programmatically; using the jBPM API directly. That is basically just more complex than using the Ant task, so that's not the way to go either (in this case).
So unfortunately these suggestions don't offer the ease-of-use that the jBPM console did - just selecting the .par file and clicking the 'Deploy' button - so we had to search a little further.

Reuse the input method of the designer

A closer look at the GPD designer plug-in shows that its upload functionality is little more than an HTTP client calling the POST method of the ProcessUploadServlet of the jBPM console. This servlet then uses the functionality of the jBPM API (as mentioned above for the Ant task and the programmatic approach). This entrance into jBPM deployment is exactly what we need: it's simple in requiring just the .par file, any database interaction is taken care of by the servlet, and any security issues can be addressed in the deployment of the console (see e.g. how that's done in the SOA platform).

So, using Apache's HttpClient library I finally came up with something like the following:
package org.jbpm.par;

import java.io.InputStream;
import java.net.URL;

import org.apache.http.HttpResponse;
import org.apache.http.client.HttpClient;
import org.apache.http.client.methods.HttpPost;
import org.apache.http.entity.mime.MultipartEntity;
import org.apache.http.entity.mime.content.ContentBody;
import org.apache.http.entity.mime.content.InputStreamBody;
import org.apache.http.impl.client.DefaultHttpClient;

public class ProcessUploader {
    public static void main(String[] args) {
        HttpClient client = null;
        try {
            // Get the input parms: first the file name, then the URL String for its location (in the repo).
            String fileName = args[0];
            URL url = new URL(args[1]);

            // Prepare the request.
            HttpPost request = new HttpPost("http://localhost:8080/jbpm-console/upload");
            ContentBody body = new InputStreamBody(url.openStream(), "application/x-zip-compressed", fileName);
            MultipartEntity entity = new MultipartEntity();
            entity.addPart("bin", body);
            request.setEntity(entity);

            // Execute the request.
            client = new DefaultHttpClient();
            HttpResponse response = client.execute(request);

            // You can examine the response further by looking at its contents:
            InputStream is = response.getEntity().getContent(); // And e.g. print it to screen...
        } catch (Exception ex) {
            ex.printStackTrace();
        } finally {
            if (client != null) {
                // Clean up after yourself.
                client.getConnectionManager().shutdown();
            }
        }
    }
}
While this simple example takes a single URL for a .par file (along with the corresponding file name) on the command line, we'll be using the same principle with a standard properties file listing all of the URLs for our process archives and looping through that list executing a request for each file. And these URLs will be pointing to our Maven repository, of course, allowing us to configure the correct versions for each release.
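That loop could be sketched as follows. One caveat: a java.util.Properties file does not guarantee any ordering, and since our deployment sequence matters (no late binding for sub-processes), the sketch below assumes a plain text file with one URL per line instead:

```java
import java.io.BufferedReader;
import java.io.FileReader;
import java.util.ArrayList;
import java.util.List;

public class ParList {

    // Reads the ordered list of .par URLs, skipping blank lines and '#' comments.
    public static List<String> readUrls(String listFile) throws Exception {
        List<String> urls = new ArrayList<String>();
        BufferedReader reader = new BufferedReader(new FileReader(listFile));
        try {
            String line;
            while ((line = reader.readLine()) != null) {
                line = line.trim();
                if (line.length() > 0 && !line.startsWith("#")) {
                    urls.add(line);
                }
            }
        } finally {
            reader.close();
        }
        return urls;
    }
}
```

Each URL in the list would then be handed to the ProcessUploader above, in order, together with a file name derived from it.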

Note that the URL for the upload servlet is hard-coded in the example; if you're uploading your .par files from a different host, you'd want to configure the host on which jBPM runs differently than 'localhost', of course.

Friday, July 30, 2010

Adding task nodes dynamically at runtime

I find that sometimes there's a good reason not to include all possible paths in a process definition at design time. Some of the more generic, non-functional paths can be included dynamically at runtime, in order not to clutter the process definition and be able to focus on the 'real' functionality your process needs to automate.

This goes e.g. for the handling of exceptions, as described here, where an automatic retry is accomplished by adding a transition dynamically from a node in which an exception occurs to itself. You probably don't want to add such 'self-transitions' at design time (that's just butt-ugly).

When you add such a path, it may include a TaskNode at some point. It did for me, and this is how I solved that.

The following code needs to run inside a jBPM context (obviously):

private void createDynamicTaskNode(ProcessInstance procInst, Node originatingNode, Node targetNode) {
    // Add the dynamic task node.
    // - Create the task.
    Task task = new Task("Dynamic task name");
    task.setProcessDefinition(procInst.getProcessDefinition());
    procInst.getTaskMgmtInstance().getTaskMgmtDefinition().addTask(task);
    task.setPooledActorsExpression("Dynamic task executors"); // Or use an actor ID.
    // - Create the node.
    TaskNode taskNode = new TaskNode("Dynamic task node name");
    taskNode.addTask(task); // Adds both ends of the association TaskNode <-> Task.
    procInst.getProcessDefinition().addNode(taskNode); // Adds both ends of the association ProcessDefinition <-> Node.

    // Create the transition between the originating node and the dynamic task node.
    Transition transition = new Transition("Transition to dynamic task node");
    originatingNode.addLeavingTransition(transition);
    taskNode.addArrivingTransition(transition);
    // Create the transition between the dynamic task node and the target node.
    transition = new Transition();
    taskNode.addLeavingTransition(transition);
    targetNode.addArrivingTransition(transition);
}

Basically it follows the same scenario for creating the node and transitions as jBPM does when it parses the JPDL process definition, using a lot of the defaults involved (such as that the task is blocking and ending it will signal the process instance to continue).
If you need any of the non-default options, you may want to read the manual to see what those options can bring you.
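For comparison, a static jPDL fragment that would result in roughly the same construct as the code above (using the names from the code; the target node name is a placeholder, and the second transition is unnamed in the code, so it is unnamed here as well):

```xml
<task-node name="Dynamic task node name">
   <task name="Dynamic task name">
      <assignment pooled-actors="Dynamic task executors"/>
   </task>
   <transition to="[target node name]"/>
</task-node>
```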

Friday, April 9, 2010

Automatic continuations in a jBPM Node

The normal pattern of using a Node would be to execute some Java code from within an Action directly attached to the Node, but as the documentation states, that means this code will also be responsible for continuing the process execution:

"The nodetype node expects one subelement action. The action is executed when the execution arrives in the node. The code you write in the actionhandler can do anything you want but it is also responsible for propagating the execution."

And you may have a different opinion, but I think it's quite tedious to repeat the same kind of boilerplate code in each and every ActionHandler implementation used in Nodes, so I wanted to come up with a more generic way to do it.

This standard node type actually gives you a choice as it comes to its execution:
  • as stated above you add an Action (directly to the Node) and have it execute that, or
  • you don't add an Action and have it leave through the default Transition.
How the latter would be useful is beyond me (but that's another discussion); still, you should be aware of this behavior when you're e.g. attaching Actions to the 'node-enter' event only and not directly to the Node. You'd have a hard time figuring out what happens if you expected to be able to leave the Node through anything other than the default Transition (it's possible, but you'd have to change the order of the Transitions at runtime, which you probably want to stay away from as far as possible).

The simple approach I chose to illustrate this was to provide an abstract base class that implements the ActionHandler interface and has to be extended by all action handlers in your code base. Well, nearly all, but I'll get back to that later. Surely there are other approaches that would yield the same result (like annotations or aspects); just knock yourself out.
Such a base class would look something like this:

import org.apache.commons.lang.StringUtils; // Apache Commons Lang, for the blank check below.
import org.jbpm.graph.def.ActionHandler;
import org.jbpm.graph.exe.ExecutionContext;

public abstract class AbstractActionHandler implements ActionHandler {
    protected String transitionName;

    public final void execute(ExecutionContext ctx) throws Exception {
        performAction(ctx);

        if (ctx.getEvent() == null) {
            // When leaving the node we can either have a transition set to be taken or else take the default transition.
            if (StringUtils.isBlank(transitionName)) {
                ctx.getNode().leave(ctx);
            } else {
                ctx.getNode().leave(ctx, transitionName);
            }
        }
    }

    // To be implemented by concrete subclasses; execute the intended Java code and optionally set the transition to be taken.
    public abstract void performAction(ExecutionContext ctx) throws Exception;
}

Now the main 'trick' here is knowing when to continue the execution and when not to; as you can tell from the code above, this can be derived from whether an event is available in the execution context. Underlying this is the knowledge of the places in a process definition where Actions can be added (and when/how they're executed in each of those cases), cross-referenced with the required point of continuation (an Action directly in a Node).

You can add an Action at six different places:
  • Directly to a Node: which is what we're talking about here for having automatic continuation.
  • In an event (e.g. 'node-enter'): most of the time that's an explicit event in the process definition.
  • In a Transition: actually then it's executed from within a 'transition' event.
  • In a timer: here it's executed after the 'timer' event is fired, so not within it.
  • In an exception handler: executed from within GraphElement's raiseException(...) method, which also has no event associated with it, but does put the current Exception in the execution context.
  • Directly to the process definition (highest level): these are just for reference from within other elements, so not an 'extra' type in any sense - so we'll just forget about this one for now.
So for the second and third entries of this list the execution context has a current event; the first, fourth and the fifth don't. So the 'trick' as it is used in the above code fragment works for the first three, for the other two (timers and exception handlers) you shouldn't use that particular base class.
It is however possible to extend the 'trick' for these other two instances, by checking the execution context for the availability of a timer (ctx.getTimer() == null) and/or the availability of an exception (ctx.getException() == null) respectively - it depends for which of the cases you want to provide a base class (or mechanism of your choice) in order to have these automatic continuations I was after.

Credit where credit is due: thanks to Arnoud W. for the hint!

Tuesday, November 24, 2009

Running jBPM 3.2.8_SOA on JBoss EAP 4.3

For my current project, I've been putting together JBoss EAP 4.3 and jBPM 3.2.8_SOA. At my company, we don't have the full JBoss SOA stack, yet do have support contracts for the two separately. Needless to say, it was unlikely that they would behave nicely together out-of-the-box...

The supported jBPM distribution comes as a zip file, lacking the installer that the community version does have. But once unzipped, there's a deploy/ directory with everything that needs to be copied onto the app server. So at first glance it seemed straightforward enough to copy the data/ and deploy/ directories from the deploy/server/default/ folder of the unzipped distribution to the appropriate server base folder.

Wrong reference to JMS class

However, upon first starting up the server, the server.log file indicated that a class expected by the MQ service MBeans (org.jboss.mq.server.jmx.Queue) could not be found. This class was included in the libs distributed with version 4.0.5 of the app server, but EAP 4.3 ships JBoss Messaging instead of JBossMQ, so it is no longer there (the queue MBeans are now backed by org.jboss.jms.server.destination.QueueService). Besides updating the MBean class, you'll need to replace these dependency entries in the jbpm-mq-service.xml file (found in the deploy/jbpm directory):

    <depends>jboss.mq:service=DestinationManager</depends>

with these:

    <depends>jboss.messaging:service=ServerPeer</depends>
    <depends>jboss.messaging:service=PostOffice</depends>

The same replacement applies to both the queue and the topic MBean entries; with those in place, the server starts up without any error messages in the log.

Unable to log onto the jBPM console

So the next step was trying to upload a process archive, but that plan was nipped in the bud by the login procedure of the jBPM console. Using a username-password combo that is in the default database entries (like the infamous admin/admin combo) I was denied access. The error logging threw me off at first in this case, as it complained about not being able to find the appropriate roles.properties and users.properties (e.g. like the ones provided for the JMX console). But adding these (in the deploy/jbpm/jsf-console.war/WEB-INF/classes/ directory) simply left me with a 403 Access Denied page, and no logging whatsoever!

The right answer was found in the jboss-web.xml (for the console). There the JAAS security domain is defined as "java:/jaas/soa", while in the jboss-service.xml (in the deploy/jbpm/jbpm-service.sar/META-INF/ directory) the name of the application is still "jbpm-console" - even though both are in the same distribution!

Changing the application name from "jbpm-console" to "soa" in the latter file does the trick, although it is just as fine to change the security domain in the former to "java:/jaas/jbpm-console" instead, as long as the two are in sync.


Now with these two minor issues out of the way, I was able to deploy a simple process and run it. Not too bad for two distributions that weren't designed to work together. Possibly there are still some issues left to be solved, which I will then undoubtedly run into during the course of this project. If so, I'll simply dedicate another post to them...

Monday, December 15, 2008

Writing a custom ClassLoader for jBPM

Within our jBPM process engine we're dealing with dependencies on libraries that are, well..., not completely stable. The code in the node handlers calls our SOA layer through generated API classes, which automagically take care of several boilerplate tasks (such as security) and are deployed as jars along with our process engine. The SOA layer evolves, so over time a number of versions have come to exist.

We encountered a problem because we're running several process definitions within one engine deployment. This includes both entirely different processes as well as new versions of already running process definitions. Our base of automated business processes has grown over time, with older process implementations relying on the early SOA API and the newer process implementations taking advantage of the later API additions. There are dependencies to different versions of the API - and while strict backwards compatibility might have solved this issue for us, in practice this proved not quite feasible.

So what were the issues we were trying to solve?
  • There are different versions of the generated API classes corresponding to different versions of the SOA services. One deployment of jBPM must be able to run processes that rely on different versions next to each other.
  • We wanted to be able to configure the dependency per process definition, but also for versions of a definition, so that a new incarnation of a process may take advantage of a new (and hopefully improved) version of a web service.
  • Not only does the correct version of the API classes need to be used; the corresponding web service endpoints also have to be available to the code running a process instance.
The configuration had to be external to the process archive, so it can be adjusted at deploy time. We've settled for a simple XML format, which allows for all required information to be present using minimum complexity. It looks something like this (the element and attribute names are illustrative, as are the jar and endpoint values):

    <process-configurations jar-directory="/opt/jbpm/api-jars">
       <process name="process1" max_version="2">
          <service name="CustomerService" jar="soa-api-1.0.jar"
                   endpoint="http://soa-host/services/CustomerService"/>
       </process>
       <process name="process1" min_version="3">
          <service name="CustomerService" jar="soa-api-2.0.jar"
                   endpoint="http://soa-host/services/CustomerService_v2"/>
       </process>
       <process name="process2">
          <service name="OrderService" jar="soa-api-2.0.jar"
                   endpoint="http://soa-host/services/OrderService"/>
       </process>
    </process-configurations>

This custom configuration consists of the following:
  • One line indicating the directory in which all the API jars are deployed. Take care that this directory and the jars in it are not on the standard classpath, because then you're gonna be stuck with only one version, which is not compatible with all of the calling code.
  • At least one entry for each process definition. A single entry can be used for each separately deployed version of the definition (as for process2) or different entries for the different version ranges (indicated using the min_version and/or max_version attributes).
  • For each process definition (version range) the jar file and endpoint for each required web service is added. The name is used for querying by the client code.
The way to add the correct jars to the classpath of a given process instance (running a certain process definition version) is through a custom class loader - a mechanism made available in jBPM version 3.3.0.GA. Just set the 'jbpm.classloader' property in jbpm.cfg.xml to 'custom' and indicate the custom class loader by setting its name in the 'jbpm.classloader.classname' property. The custom class loader itself is almost too simple to mention: it extends java.net.URLClassLoader, and in its constructor it determines the name and version of the process definition before reading the applicable jar file names (as URLs) from the custom configuration file.

We've put the actual reading from the XML file in a utility class; for reading from XML we could have gone completely overboard and set up a schema and compiled Java classes from it with JAXB. Instead we simply used the dom4j library and a couple of simple XPath expressions to accomplish the same.

Our utility class has the following interface:
public final class ConfigurationUtil {
   public static URL[] getJarsForProcessDefinition(String processId, int version) throws IOException {...}
   public static String getEndpointForProcessDefinition(
      String processId, int version, String serviceName) throws IOException {...}
}
The first method delivers everything needed by the custom class loader's super class constructor. The second method reuses the XML parsing facility and allows the last requirement mentioned in the issues above to be satisfied efficiently.
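The XML parsing behind those two methods could be sketched like this; to keep the example self-contained it uses the JDK's built-in XPath support instead of dom4j, and it assumes an illustrative configuration layout (<process> elements with name/min_version/max_version attributes and nested <service> elements carrying a jar attribute):

```java
import java.io.File;
import java.net.URL;
import java.util.LinkedHashSet;
import java.util.Set;
import javax.xml.parsers.DocumentBuilderFactory;
import javax.xml.xpath.XPath;
import javax.xml.xpath.XPathConstants;
import javax.xml.xpath.XPathFactory;
import org.w3c.dom.Document;
import org.w3c.dom.Element;
import org.w3c.dom.NodeList;

public final class ConfigReader {

    // Collect the jar URLs configured for the given process definition version.
    public static URL[] getJars(File configFile, String processName, int version)
            throws Exception {
        Document doc = DocumentBuilderFactory.newInstance()
                .newDocumentBuilder().parse(configFile);
        XPath xpath = XPathFactory.newInstance().newXPath();
        // Select all entries for this process; the version range check is done
        // in Java below, which keeps the XPath expression simple.
        NodeList processes = (NodeList) xpath.evaluate(
                "//process[@name='" + processName + "']", doc, XPathConstants.NODESET);
        Set<URL> jars = new LinkedHashSet<URL>();
        for (int i = 0; i < processes.getLength(); i++) {
            Element process = (Element) processes.item(i);
            if (!inRange(process, version)) {
                continue;
            }
            NodeList services = process.getElementsByTagName("service");
            for (int j = 0; j < services.getLength(); j++) {
                String jar = ((Element) services.item(j)).getAttribute("jar");
                jars.add(new File(jar).toURI().toURL());
            }
        }
        return jars.toArray(new URL[jars.size()]);
    }

    // An absent min_version/max_version attribute means 'unbounded'.
    private static boolean inRange(Element process, int version) {
        String min = process.getAttribute("min_version");
        String max = process.getAttribute("max_version");
        return (min.length() == 0 || version >= Integer.parseInt(min))
                && (max.length() == 0 || version <= Integer.parseInt(max));
    }
}
```

getEndpointForProcessDefinition would follow the same pattern, selecting the endpoint attribute of the <service> element with the matching name.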

In all, writing a custom ClassLoader was not much of a task anymore once we figured out what kind of custom configuration was applicable to our situation...