Securing Rhino in Java6

In RHQ we let the users provide scripts that can be run when an alert fires. This is great for automation because the script can do anything the users can do with our remote API. But the users can, of course, write a script like this:

java.lang.System.exit(1); 

This would shut down the whole RHQ server, which, of course, is not so nice.

The solution to this problem is to run the Rhino script engine in a custom access control context. One has to define the set of Java permissions that the scripts are granted and specifically NOT include the “exitVM” RuntimePermission in that set. A custom AccessControlContext can then be created with those permissions.
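For illustration, here is a minimal sketch of what building such a permission set could look like (the individual grants below are made up and will differ per application; the important part is what is missing, namely RuntimePermission("exitVM")):

import java.io.FilePermission;
import java.security.PermissionCollection;
import java.security.Permissions;
import java.util.PropertyPermission;

public class ScriptPermissions {

    // Build the permission set granted to the user scripts. The individual
    // grants here are hypothetical; the crucial point is the absence of
    // RuntimePermission("exitVM") from the set.
    public static PermissionCollection buildScriptPermissions() {
        Permissions perms = new Permissions();
        perms.add(new PropertyPermission("*", "read"));        // read system properties
        perms.add(new FilePermission("/tmp/-", "read,write")); // some scratch space
        return perms;
    }
}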

But now comes the fun part. In Java6 update 28, the Rhino script engine actually changed the way it can be secured, due to a discovered security vulnerability. So in a Java6 update 27 patched with this patch, or in Java6 update 28 and later, Rhino runs the scripts with the access control context that it was itself created with. In the unpatched Java6 u27 and earlier, the scripts were run with the access control context active at the time the script was evaluated.

So what does that mean for you, my dear readers, who want to reliably secure your application and allow custom scripts to be executed in it at the same time? Well, of course, you need to secure your script engine twice (or refuse to run on anything older than Java6 u28).

Let me show you how it is done in RHQ:

// 'src' is a CodeSource describing the scripts and 'permissions' is the
// (final) PermissionCollection granted to them; note the absence of "exitVM"
ProtectionDomain scriptDomain = new ProtectionDomain(src, permissions);
AccessControlContext ctx = new AccessControlContext(new ProtectionDomain[] { scriptDomain });
try {
    return AccessController.doPrivileged(new PrivilegedExceptionAction<ScriptEngine>() {
        @Override
        public ScriptEngine run() throws Exception {
            ScriptEngineManager engineManager = new ScriptEngineManager();
            ScriptEngine engine = engineManager.getEngineByName("JavaScript");
            return new SandboxedScriptEngine(engine, permissions);
        }
    }, ctx);
} catch (PrivilegedActionException e) {
    ...
}

What do you actually see in the code above? The privileged block is there to ensure that the script engine is created using the desired access control context (so that it can use it in Java6 u28). The script engine itself (created by the call to getEngineByName) is then wrapped in a SandboxedScriptEngine, which is a special decorator that wraps all the eval() invocations in an access control context with the specified permissions. That ensures the access control context is enforced in the unpatched Java6 u27 and earlier.
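The real SandboxedScriptEngine in RHQ decorates the full ScriptEngine interface; here is a minimal sketch of the core idea (only a single eval() overload is shown and the class name is mine):

import java.security.AccessControlContext;
import java.security.AccessController;
import java.security.PermissionCollection;
import java.security.PrivilegedActionException;
import java.security.PrivilegedExceptionAction;
import java.security.ProtectionDomain;

import javax.script.ScriptEngine;
import javax.script.ScriptException;

public class SandboxedScriptEngineSketch {
    private final ScriptEngine engine;
    private final AccessControlContext context;

    public SandboxedScriptEngineSketch(ScriptEngine engine, PermissionCollection permissions) {
        this.engine = engine;
        this.context = new AccessControlContext(
            new ProtectionDomain[] { new ProtectionDomain(null, permissions) });
    }

    public Object eval(final String script) throws ScriptException {
        try {
            // enforce the restricted context around the evaluation itself;
            // this is what secures the unpatched Java6 u27 and earlier
            return AccessController.doPrivileged(new PrivilegedExceptionAction<Object>() {
                public Object run() throws ScriptException {
                    return engine.eval(script);
                }
            }, context);
        } catch (PrivilegedActionException e) {
            throw (ScriptException) e.getCause();
        }
    }
}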


Git: merging after a revert

As it happened, I managed to merge a feature branch into our mainline that was not completely finished. After trying to fix it directly in the mainline, I figured out that the best thing to do was to revert my changes in master and continue in a feature branch.

Then the time came to merge again.

I tried the usual:

$ git checkout master 
$ git merge my-feature-branch 

This of course resulted in a great number of conflicts, and some files from the feature branch were completely missing in the mainline, because git saw those files deleted by my revert commit in the mainline while they weren’t touched in the feature branch. Git therefore quite reasonably assumed that, as the deletes in the mainline were newer than the versions in the feature branch, keeping them deleted was the right thing to do. Other conflicts were caused by files having been deleted in the mainline but changed in the feature branch. Git doesn’t know what to do with these (I wouldn’t either if I were it 😉 ).

So that didn’t go too well, I thought. Feeling defeated, I had to:

$ git merge --abort

After a bit of googling, the answer seemed to be “revert the revert and then merge”. And yes, it worked! 🙂

$ git revert <my-revert-commit-hash> 
$ git merge my-feature-branch 

By reverting the revert I effectively put the mainline into a state where it contained the files from the feature branch in a form that also exists on the feature branch (the commit hashes of course don’t match, but the 3-way merge has a much better starting point than with the files altogether missing). After that, the merge could figure out the changes I made in the feature branch and update the affected files. I was even lucky enough to get no conflicts, even though theoretically conflicts could have occurred both during the revert and during the merge.


Using Byteman to detect native memory leaks

In RHQ we use the Augeas library to do the configuration file parsing and updates for us in some of the plugins. Augeas in itself is pretty cool, and its language for describing the structure of arbitrary configuration files and how to update them is pretty powerful. The only downside to using Augeas is that it is a C library; we therefore have to bind to it and use it carefully so that we don’t leak its native resources, which aren’t under the control of the JVM’s garbage collector.

It all boils down to just calling the close() method on the Augeas instance whenever we’re done with it.
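In code, that is nothing more than this (the constructor arguments here are illustrative; the (String, String, int) signature matches the Byteman rule shown below):

// root, lens load path and flags of the native Augeas handle
Augeas augeas = new Augeas("/", "/usr/share/augeas/lenses", 0);
try {
    // ... parse and update the configuration files ...
} finally {
    // release the native resources the JVM's garbage collector knows nothing about
    augeas.close();
}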

As simple as it may seem, we still managed to mess it up and found out that there were some memory leaks that caused the RHQ agent to slowly (or not so slowly, depending on its configuration) grow its memory usage, which the JVM’s maximum heap size couldn’t guard against.

The source code of the Apache plugin isn’t the simplest, and there are many places that invoke Augeas and interact in various ways, so debugging it all isn’t an easy task. Even harder, we thought, would be to come up with some unit tests that would make sure we don’t leak Augeas references.

But then a crazy idea entered my mind. I knew Byteman was a tool for bytecode manipulation. My idea was to somehow use it in our tests to do reference counting (by instrumenting the Augeas init() and close() calls). It turns out it is very easy to do that with Byteman, and I was able to achieve even more than I hoped for.

Byteman integrates quite nicely with TestNG, which we use for our unit tests, and so in a couple of steps I was able to implement a reference counter that not only gave me the difference between the number of Augeas instances created vs. closed BUT also gave me the stack traces of the code that created a reference that wasn’t close()‘d afterwards. That, I think, is absolutely cool.

The rules I added to my tests are quite simple:


@BMRules(
    rules = {
        @BMRule(name = "increment reference count on Augeas init", targetClass = "net.augeas.Augeas",
            targetMethod = "(String, String, int)",
            helper = "org.rhq.plugins.apache.augeas.CreateAndCloseTracker",
            action = "recordCreate($0, formatStack())"),
        @BMRule(name = "decrement reference count on Augeas close", targetClass = "net.augeas.Augeas",
            targetMethod = "close()", helper = "org.rhq.plugins.apache.augeas.CreateAndCloseTracker",
            action = "recordClose($0, formatStack())") })

There is indeed nothing special about them. I tell Byteman to call my helper class’s recordCreate() method whenever Augeas init() is called, passing in the Augeas instance ($0 stands for this in the context of the instrumented method) and a nice call stack. The second rule merely calls recordClose() on my helper with the instance of Augeas that is being closed, and again the call stack.

You can check out the code for my helper class here. As you might have guessed, it’s only a little more than a hashmap where the keys are the Augeas instances and the values are the call stacks. By processing this map after all the tests have run, I can quite easily figure out if and where we leak native memory.
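For those who don’t want to follow the link, a simplified sketch of such a helper could look like this (the real class in the RHQ sources does a bit more; the recordCreate()/recordClose() names match the rules above):

import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

import org.jboss.byteman.rule.Rule;
import org.jboss.byteman.rule.helper.Helper;

public class CreateAndCloseTracker extends Helper {

    // instance -> stack trace of the code that created it; static, because
    // Byteman instantiates the helper anew for each rule firing
    private static final Map<Object, String> CREATED = new ConcurrentHashMap<Object, String>();

    public CreateAndCloseTracker(Rule rule) {
        super(rule);
    }

    public void recordCreate(Object augeas, String stack) {
        CREATED.put(augeas, stack);
    }

    public void recordClose(Object augeas, String stack) {
        CREATED.remove(augeas);
    }

    // whatever is left in here after the tests have run was never close()'d
    public static Map<Object, String> getLeaks() {
        return CREATED;
    }
}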


Making TestNG @Listeners apply to only certain classes

TestNG defines a @Listeners annotation that is analogous to the listeners element in the test suite configuration XML file. This annotation can be put on any class, but it is not applied only to that class; it applies uniformly to all the tests in the test suite (which is in line with the purpose of the original XML element, but it certainly is confusing to see an annotation on a class that has much wider influence than that single class).

On the other hand, I really like what the @Listeners annotation offers. It is a way to “favor composition over inheritance” – a famous recommendation of the GoF. It would be great if there were a way of using the @Listeners annotation to specify “augmentations” of the tests in that precise test class, so that I could implement the listeners in separation and wouldn’t have to compose awkward class hierarchies to get the behaviour I want in my test class.

Imagine a world where one could write a test like this:


@ClassListeners(JMockTest.class, BytemanTest.class, 
    RHQPluginContainerTest.class, DatabaseTest.class)
public class MyTests {
    
     @Test
     @BMRule(... my byteman rule definition ...)
     @PluginContainerSetup(... RHQ plugin container setup ...)
     @DatabaseState(url = "my-db-dump.xml.zip", dbVersion = "2.100")
     public void test() {
         Mockery context = TestNG.getClassListenerAccess(JMockTest.class);
         RHQPluginContainerAccess pc = TestNG.getClassListenerAccess(RHQPluginContainerTest.class);
         PluginContainerConfiguration config = pc.createMockedConfiguration(context);
         
         context.checking( ... my expectations ... );

         Connection dbConnection = TestNG.getClassListenerAccess(DatabaseTest.class)
             .getJdbcConnection();

         ... my test on the RHQ plugin container modified using the byteman rules ...
     }
}

public @interface ClassListeners {
    Class<? extends IClassListener<?>>[] value();
}

public interface IClassListener<T> extends ITestNGListener {

      T getAccessObject(IInvokedMethod testMethod);
}

To get near that ideal state with the current TestNG (well, we’re using 5.13 in RHQ, but as far as I checked there is nothing new in this regard in the latest TestNG), I had to do the following:

  1. Restrict my listeners to only apply themselves if they are defined as a listener on the class of the current test method (i.e. basically break the contract of the annotation as it stands right now).
  2. Make the data that is available in the above example through the “access” objects accessible statically from thread-local storage. This is so that the test method can get to the data defined by the listener without having a reference to it.

Here is a short synthetic example of how I did it:



public class MyListener implements IInvokedMethodListener {
    private static ThreadLocal<AccessObject> ACCESS = new ThreadLocal<AccessObject>();

    public static AccessObject getAccess() {
        return ACCESS.get();
    }

    public void beforeInvocation(IInvokedMethod method, ITestResult testResult) {
        //checking that the test actually wants the augmentation I provide
        if (!isListenerOnTestClass(method)) {
            return;
        }
        ... do some setup stuff ...

        //setup the access object so that the test can get to the data I defined.
        ACCESS.set(new AccessObject());
    }

    public void afterInvocation(IInvokedMethod method, ITestResult testResult) {
        if (!isListenerOnTestClass(method)) {
            return;
        }
        ... tear down ...
        ACCESS.set(null);
    }

    private boolean isListenerOnTestClass(IInvokedMethod method) {
        Class cls = method.getTestMethod().getTestClass().getRealClass();

        while (cls != null) {
            Listeners annotation = cls.getAnnotation(Listeners.class);
  
            if (annotation != null) {
                for(Class listener : annotation.value()) {
                    if (this.getClass().equals(listener)) {
                        return true;
                    }
                }
            }

            cls = cls.getSuperclass();
        }

        return false;
     }
}

@Listeners(MyListener.class)
public class MyTest {

     @Test
     public void test() {
         AccessObject obj = MyListener.getAccess();
         ... my test ...
     }
}


Properties referencing each other

This must have been done countless times before, but because I just couldn’t google anything useful (and to stay true to the name of this blog), I implemented it myself yet again.

The problem is this. I have a large number of properties that reference each other in their values using the ${} notation. E.g. the following property file:


message=Hello ${name}!
name=Frank

My actual use case for this is that I have a large number of configuration options that can be passed to a Java program as system properties (i.e. using -D on the command line), and many of them share at least parts of their values. I therefore wanted to define those shared parts using a few additional options and default the rest based on those few shared ones. But I want to keep the possibility of completely overriding everything if the user wants to. E.g.:

These would be specified on the command line:


port=111
host=localhost

And the rest would be defaulted to the values based on the values above:


service1=${host}:${port}/service1
service2=${host}:${port}/service2

But that’s not all. Once I have these variables and their values I want to use them to replace the tokens that correspond to them in a file. E.g.:


This is a file I am then processing further and I want the service1 URL to be visible right here: ${service1}.

Again, that is a rather common requirement and nothing too surprising to implement. But I still couldn’t find a nice and reusable class in some standard library that would do this for me efficiently.

Then I stumbled upon the TokenReplacingReader and thought to myself that it’s exactly the thing I need to solve both of my problems (after I fixed it slightly, see below).

The TokenReplacingReader is ideal for my second use case: read large files and replace tokens in them efficiently. But how, you say, does it solve my first problem? Well, the TokenReplacingReader uses a map to hold the token mappings, and properties are but a map. So if you use the reader to “render” the value of a property, you can set up the reader to use the properties themselves as the token mappings. Can you see the beautiful recursion in there? 😉

Ok, so here’s the code that I came up with:


import java.io.IOException;
import java.io.Reader;
import java.io.StringReader;
import java.util.ArrayDeque;
import java.util.ArrayList;
import java.util.Collection;
import java.util.Deque;
import java.util.HashMap;
import java.util.HashSet;
import java.util.List;
import java.util.Map;
import java.util.Properties;
import java.util.Set;

/**
 * This map is basically an extension of the {@link Properties} class that can resolve the references
 * to values of other keys inside the values.
 * <p>
 * I.e., if the map is initialized with the following mappings:
 * <p>
 * <code>
 * name => world <br />
 * hello => Hello ${name}!
 * </code>
 * <p>
 * then the call to:
 * <p>
 * <code>
 * get("hello")
 * </code>
 * <p>
 * will return:
 * <code>
 * "Hello world!"
 * </code>
 * <p>
 * To access and modify the underlying unprocessed values, one can use the "raw" counterparts of the standard
 * map methods (e.g. instead of {@link #get(Object)}, use {@link #getRaw(Object)}, etc.).
 * 
 * @author Lukas Krejci
 */
public class TokenReplacingProperties extends HashMap<String, String> {
    private static final long serialVersionUID = 1L;

    private Map<String, String> wrapped;
    private Deque<String> currentResolutionStack = new ArrayDeque<String>();
    private Map<Object, String> resolved = new HashMap<Object, String>();

    private class Entry implements Map.Entry<String, String> {
        private Map.Entry<String, String> wrapped;
        private boolean process;
        
        public Entry(Map.Entry<String, String> wrapped, boolean process) {
            this.wrapped = wrapped;
            this.process = process;
        }
        
        @Override
        public boolean equals(Object obj) {
            if (obj == this) {
                return true;
            }
            
            if (!(obj instanceof Entry)) {
                return false;
            }
               
            Entry other = (Entry) obj;
            
            String key = wrapped.getKey();
            String otherKey = other.getKey();
            String value = getValue();
            String otherValue = other.getValue();
            
            return (key == null ? otherKey == null : key.equals(otherKey)) &&
                   (value == null ? otherValue == null : value.equals(otherValue));
        }
        
        public String getKey() {
            return wrapped.getKey();
        }
        
        public String getValue() {
            if (process) {
                return get(wrapped.getKey());
            } else {
                return wrapped.getValue();
            }
        }
        
        @Override
        public int hashCode() {
            String key = wrapped.getKey();
            String value = getValue();
            return (key == null ? 0 : key.hashCode()) ^
            (value == null ? 0 : value.hashCode());
        }
        
        public String setValue(String value) {
            resolved.remove(wrapped.getKey());
            return wrapped.setValue(value);
        }
        
        @Override
        public String toString() {
            return wrapped.toString();
        }
    }
    
    public TokenReplacingProperties(Map<String, String> wrapped) {
        this.wrapped = wrapped;
    }

    @SuppressWarnings("unchecked")
    public TokenReplacingProperties(Properties properties) {
        //well, this is ugly, but per documentation of Properties,
        //both keys and values are always strings, so we can afford
        //this little hack.
        @SuppressWarnings("rawtypes")
        Map map = properties;        
        this.wrapped = (Map<String, String>) map;
    }

    @Override
    public String get(Object key) {
        if (resolved.containsKey(key)) {
            return resolved.get(key);
        }

        if (currentResolutionStack.contains(key)) {
            throw new IllegalArgumentException("Property '" + key + "' indirectly references itself in its value.");
        }

        String rawValue = getRaw(key);

        if (rawValue == null) {
            return null;
        }

        currentResolutionStack.push(key.toString());

        String ret = readAll(new TokenReplacingReader(new StringReader(rawValue), this));

        currentResolutionStack.pop();

        resolved.put(key, ret);

        return ret;
    }

    public String getRaw(Object key) {
        return wrapped.get(key);
    }
    
    @Override
    public String put(String key, String value) {
        resolved.remove(key);
        return wrapped.put(key, value);
    }

    @Override
    public void putAll(Map<? extends String, ? extends String> m) {
        for(String key : m.keySet()) {
            resolved.remove(key);
        }
        wrapped.putAll(m);
    }

    public void putAll(Properties properties) {
        for(String propName : properties.stringPropertyNames()) {
            put(propName, properties.getProperty(propName));
        }
    }
    
    @Override
    public void clear() {
        wrapped.clear();
        resolved.clear();
    }

    @Override
    public boolean containsKey(Object key) {
        return wrapped.containsKey(key);
    }

    @Override
    public Set<String> keySet() {
        return wrapped.keySet();
    }

    @Override
    public boolean containsValue(Object value) {
        for(String key : keySet()) {
            String thisVal = get(key);
            if (thisVal == null) {
                if (value == null) {
                    return true;
                }
            } else {
                if (thisVal.equals(value)) {
                    return true;
                }
            }
        }
        
        return false;
    }

    /**
     * Checks whether this map contains the unprocessed value.
     * 
     * @param value
     * @return
     */
    public boolean containsRawValue(Object value) {
        return wrapped.containsValue(value);
    }
    
    /**
     * The returned set <b>IS NOT</b> backed by this map
     * (unlike in the default map implementations).
     * <p>
     * The {@link java.util.Map.Entry#setValue(Object)} method
     * does modify this map though.
     */
    @Override
    public Set<Map.Entry<String, String>> entrySet() {
        Set<Map.Entry<String, String>> ret = new HashSet<Map.Entry<String, String>>();
        for(Map.Entry<String, String> entry : wrapped.entrySet()) {
            ret.add(new Entry(entry, true));
        }
        
        return ret;
    }

    public Set<Map.Entry<String, String>> getRawEntrySet() {
        Set<Map.Entry<String, String>> ret = new HashSet<Map.Entry<String, String>>();
        for(Map.Entry<String, String> entry : wrapped.entrySet()) {
            ret.add(new Entry(entry, false));
        }
        
        return ret;
    }
    
    @Override
    public String remove(Object key) {
        resolved.remove(key);
        return wrapped.remove(key);
    }

    @Override
    public int size() {
        return wrapped.size();
    }

    /**
     * Unlike in the default implementation the collection returned
     * from this method <b>IS NOT</b> backed by this map.
     */
    @Override
    public Collection<String> values() {
        List<String> ret = new ArrayList<String>();
        for(String key : keySet()) {
            ret.add(get(key));
        }
        
        return ret;
    }

    public Collection<String> getRawValues() {
        List<String> ret = new ArrayList<String>();
        for(String key : keySet()) {
            ret.add(wrapped.get(key));
        }
        
        return ret;
    }
    
    private String readAll(Reader rdr) {
        int in = -1;
        StringBuilder bld = new StringBuilder();
        try {
            while ((in = rdr.read()) != -1) {
                bld.append((char) in);
            }
        } catch (IOException e) {
            throw new IllegalStateException("Exception while reading a string.", e);
        }

        return bld.toString();
    }
}
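Using it for the first problem is then as simple as this (a small example of my own, reusing the property file from the beginning of the post):

import java.util.Properties;

public class TokenReplacingPropertiesDemo {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.setProperty("name", "Frank");
        props.setProperty("message", "Hello ${name}!");

        TokenReplacingProperties resolved = new TokenReplacingProperties(props);

        System.out.println(resolved.get("message"));    // prints: Hello Frank!
        System.out.println(resolved.getRaw("message")); // prints: Hello ${name}!
    }
}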

The TokenReplacingReader as implemented in the original blog post by Jakob Jenkov had a bug in it, so I had to fix it slightly:


import java.io.IOException;
import java.io.PushbackReader;
import java.io.Reader;
import java.nio.CharBuffer;
import java.util.Map;

/**
 * Copied from http://tutorials.jenkov.com/java-howto/replace-strings-in-streams-arrays-files.html
 * with fixes to {@link #read(char[], int, int)} and added support for escaping.
 *
 * @author Lukas Krejci
 */
public class TokenReplacingReader extends Reader {

    private PushbackReader pushbackReader = null;
    private Map<String, String> tokens = null;
    private StringBuilder tokenNameBuffer = new StringBuilder();
    private String tokenValue = null;
    private int tokenValueIndex = 0;
    private boolean escaping = false;
    
    public TokenReplacingReader(Reader source, Map<String, String> tokens) {
        this.pushbackReader = new PushbackReader(source, 2);
        this.tokens = tokens;
    }

    public int read(CharBuffer target) throws IOException {
        throw new RuntimeException("Operation Not Supported");
    }

    public int read() throws IOException {
        if (this.tokenValue != null) {
            if (this.tokenValueIndex < this.tokenValue.length()) {
                return this.tokenValue.charAt(this.tokenValueIndex++);
            }
            if (this.tokenValueIndex == this.tokenValue.length()) {
                this.tokenValue = null;
                this.tokenValueIndex = 0;
            }
        }

        int data = this.pushbackReader.read();
        
        if (escaping) {
            escaping = false;
            return data;
        }
        
        if (data == '\\') {
            escaping = true;
            return data;       
        }

        if (data != '$')
            return data;

        data = this.pushbackReader.read();
        if (data != '{') {
            this.pushbackReader.unread(data);
            return '$';
        }
        this.tokenNameBuffer.delete(0, this.tokenNameBuffer.length());

        data = this.pushbackReader.read();
        while (data != '}') {
            this.tokenNameBuffer.append((char) data);
            data = this.pushbackReader.read();
        }

        this.tokenValue = tokens.get(this.tokenNameBuffer.toString());

        if (this.tokenValue == null) {
            this.tokenValue = "${" + this.tokenNameBuffer.toString() + "}";
        }
        
        if (!this.tokenValue.isEmpty()) {
            return this.tokenValue.charAt(this.tokenValueIndex++);
        } else {
            return read();
        }
    }

    public int read(char cbuf[]) throws IOException {
        return read(cbuf, 0, cbuf.length);
    }

    public int read(char cbuf[], int off, int len) throws IOException {
        int i = 0;
        for (; i < len; i++) {
            int nextChar = read();
            if (nextChar == -1) {
                if (i == 0) {
                    i = -1;
                }
                break;
            }
            cbuf[off + i] = (char) nextChar;
        }
        return i;
    }

    public void close() throws IOException {
        this.pushbackReader.close();
    }

    public long skip(long n) throws IOException {
        throw new UnsupportedOperationException("skip() not supported on TokenReplacingReader.");
    }

    public boolean ready() throws IOException {
        return this.pushbackReader.ready();
    }

    public boolean markSupported() {
        return false;
    }

    public void mark(int readAheadLimit) throws IOException {
        throw new IOException("mark() not supported on TokenReplacingReader.");
    }

    public void reset() throws IOException {
        throw new IOException("reset() not supported on TokenReplacingReader.");
    }
}
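And the second problem, replacing tokens while streaming a file through the reader, can be solved along these lines (the file names are hypothetical):

import java.io.FileReader;
import java.io.FileWriter;
import java.io.Reader;
import java.io.Writer;
import java.util.HashMap;
import java.util.Map;

public class TokenReplacingReaderDemo {
    public static void main(String[] args) throws Exception {
        // the token mappings could just as well be a TokenReplacingProperties instance
        Map<String, String> tokens = new HashMap<String, String>();
        tokens.put("service1", "localhost:111/service1");

        Reader in = new TokenReplacingReader(new FileReader("template.txt"), tokens);
        Writer out = new FileWriter("processed.txt");
        try {
            int c;
            while ((c = in.read()) != -1) {
                out.write(c);
            }
        } finally {
            in.close();
            out.close();
        }
    }
}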

Planning configuration and templates export/import in RHQ

We are currently starting to think about what it would take to implement exporting and importing various "configuration" elements, including metric and alert templates, server configuration, dynagroup definitions, users & roles and possibly other "entities", between different RHQ installations.

We were asked for this functionality a couple of times in the past, and now the time has come when we’d like to take a stab at it. But for it to be truly useful, we need user feedback. If you have some strong opinions about which parts of the RHQ server’s "configuration" (which in essence is everything but the inventory) should be exportable, please shout now. You can leave your feedback here or send a message to either of our mailing lists (rhq-devel, rhq-users) or even post a message to our forums.

I’ve started a wiki page about the subject if you want to know what our current thinking about all this is. Since this is at a very early stage of planning, just about everything is up for debate. To start off the discussion, I’d like to ask the following questions:

  1. What parts of RHQ would you like to sync between RHQ servers?
    • server configuration
    • users
    • roles
    • metric templates
    • alert templates
    • content sources
    • repos
    • packages
    • dyna groups
    • plugins
    • configuration, connection settings of a resource
    • metric schedules of a resource
    • alert definitions of a resource
  2. How granular should the export be?
    • all or nothing – i.e. "true" sync
    • per "subsystem" (i.e. all users&roles, all templates, content sources &repos & packages, …)
    • pick and choose individual entities
  3. How segmented should the export be?
    • lump different entity types together in one export file
    • export per "subsystem"
  4. When should the import be run?
    • during RHQ server installation
    • any time

If you want to shape the future of RHQ, now’s the time! 😉


Scripted alert notifications in RHQ

Since RHQ3, we support "alert sender" server plugins. Basically an alert sender is a piece of code that can generate some sort of response to the firing of an alert.

There’s a whole bunch of these in RHQ, including:

  • emails – sends emails to the configured addresses informing that an alert occurred within the system
  • roles, users – notifying members of given roles or users about the alert
  • mobicents – sends SMS messages about the alert
  • log4j – writes a log entry when the alert fires
  • operation – executes an operation on some resource in the RHQ inventory when the alert fires

This blog post is about a new such alert sender that is capable of executing a CLI script.

RHQ has a command-line client, the CLI, which is able to remotely connect to an RHQ server and execute commands on it. Basically, the CLI enables users to use the remote API of the RHQ server in a JavaScript environment.

Now, with CLI scripts as alert notifications, you have the same power at your fingertips as you have in the CLI directly on the server. The scripts can do literally anything you can do in your CLI scripts.

As an example, consider the following script:

/*
 * This script is supposed to be notifying about alerts on a web application.
 * It will save some stats into a file on the RHQ server and then invoke a bash
 * script if it finds it necessary.
 */

//get the proxied resource so that I can use the more convenient syntax than 
//just the raw calls to the remote APIs
//notice the predefined variable 'alert' that contains the object of the alert that is being 
//fired
var myResource = ProxyFactory.getResource(alert.alertDefinition.resource.id)

//find the metric (aka measurement) for the "Sessions created per Minute"
//this will give us the picture about the load on the web app
var definitionCriteria = new MeasurementDefinitionCriteria()
definitionCriteria.addFilterDisplayName('Sessions created per Minute')
definitionCriteria.addFilterResourceTypeId(myResource.resourceType.id)

var definitions = MeasurementDefinitionManager.findMeasurementDefinitionsByCriteria(definitionCriteria)

//only continue if we have the definition
if (definitions.empty) {
   throw new java.lang.Exception("Could not get 'Sessions created per Minute' metric on resource " 
      + myResource.id)
}
   
var definition = definitions.get(0)

//start date is now - 8hrs
var startDate = new Date() - 8 * 3600 * 1000 //8 hrs in milliseconds
var endDate = new Date()

//get the data of the metric for the last 8 hours, chunked up to 60 intervals
var data = MeasurementDataManager.findDataForResource(myResource.id, [ definition.id ], startDate, endDate, 60)

exporter.setTarget('csv', '/the/output/folder/for/my/metrics/' + endDate + '.csv')

//the data contains an entry for each of the definitions we asked the data for...
exporter.write(data.get(0))

//ok, we've exported the stats
//now we want to make sure that our database is still running

//let's suppose the resource id of the datasource is "well-known"
//we could get it using criteria APIs as well, of course
var dataSource = ProxyFactory.getResource(10411)

//now check if the datasource's underlying connection is up
//There is an operation defined on a "Data Source" resource type, which we can call
//as a simple javascript method on the resource proxy
connectionTest = dataSource.testConnection()

//the result will be null, if the operation couldn't be invoked at all or if it took
//too long. Otherwise it will be a configuration object representing the operation
//results as defined by the operation definition.
//In this case, the result of an operation is a configuration object with a single
//property called "result" which is true if the connection could be established and 
//false otherwise
if (connectionTest == null || connectionTest.get('result').booleanValue == false) {
    //ok, this means we had problems connecting to the database
    //let's suppose there's an executable bash script somewhere on the server that
    //the admins use to restart the database
    java.lang.Runtime.getRuntime().exec('/somewhere/on/the/server/restart-database.sh')
}

In other words, it is quite powerful 🙂

There is a design wiki page with documentation of the feature, if you’re interested in reading more about it:
http://wiki.rhq-project.org/display/RHQ/Design+-+Serverside+scripts

The brand new RHQ 4.0.0.Beta1 is out and contains this new feature. Go check it out!

For the impatient, I recorded a short screencast of the new feature in action.

It is best viewed in HD but for that you have to view it directly on vimeo.com. Just click the “HD” in the video.
