Job failure when stderr written by tool
By default (and for backwards compatibility), Galaxy's job runner will put any job that writes anything to stderr into an error state. As of August 2012, each tool can instead specify how its stdout, stderr, and exit code should be examined to determine whether the tool has failed. That functionality is documented in the Tool Config Syntax section covering the <stdio>, <regex>, and <exit_code> tag sets.
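As a sketch of that syntax (the range, pattern, and description values here are illustrative; consult the Tool Config Syntax documentation for the full set of attributes), a tool's XML descriptor might declare:

```xml
<stdio>
    <!-- Treat any non-zero exit code as a fatal error -->
    <exit_code range="1:" level="fatal" description="Tool returned a non-zero exit code" />
    <!-- Only treat stderr as an error when it matches a pattern -->
    <regex match="Exception|Error" source="stderr" level="fatal" description="Error reported on stderr" />
</stdio>
```

With a block like this in place, warning or progress messages on stderr no longer fail the job; only a non-zero exit code or a matching error pattern does.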
Prior to this functionality, a wrapper script (e.g. a shell or Python script, whichever you prefer) was normally used to prevent warning or progress messages on stderr from being treated by Galaxy as an error.
1. Shell script
Assaf Gordon contributed a shell wrapper in tools/bigbedwig/discard_stderr_wrapper.sh:
```shell
#!/bin/sh
# STDERR wrapper - discards STDERR if command execution was OK.
#
# This script executes a given command line,
# while saving the STDERR in a temporary file.
#
# When the command is completed, it checks to see if the exit code was zero.
# If so - the command is assumed to have succeeded - the STDERR file is discarded.
# If not - the command is assumed to have failed, and the STDERR file is dumped
# to the real STDERR.
#
# Use this wrapper for tools which insist on writing stuff to STDERR even if
# they succeeded - which throws Galaxy off balance.
#
# Copyright 2009 (C) by Assaf Gordon
# This file is distributed under the BSD license.
TMPFILE=$(mktemp) || exit 1

"$@" 2> $TMPFILE
EXITCODE=$?

# Exit code != 0 ?
if [ "$EXITCODE" -ne "0" ]; then
    cat $TMPFILE >&2
fi

rm $TMPFILE
exit $EXITCODE
```
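To use the wrapper, prefix the tool's command line with it in the tool descriptor's <command> tag (myprog and its arguments here are placeholders, not a real tool):

```xml
<command> discard_stderr_wrapper.sh myprog arg1 -f arg2 </command>
```

The wrapper runs the real command, and Galaxy only sees stderr output when the command's exit code is non-zero.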
2. Python script
The Python equivalent of Assaf's shell script calls the command inside a Python wrapper. For the user to see everything written to stderr (and stdout, while we're at it) in their history, you need to ask Galaxy to create a new output dataset. This is done by adding another output to the XML tool descriptor file.
Your wrapper will need to redirect all tool output to that file; Galaxy will pass the file name to your wrapper.
For example, in this code from rgHaploView.py, stderr and stdout are both redirected to a new history output - a log file (e.g. $myLogFile on the command line; define a new tool output of type text called myLogFile). You end up with one more output, but at least you can reliably see the tool's stderr and stdout.
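The corresponding tool descriptor additions might look like the following sketch (the input parameter and label are illustrative; the output name myLogFile matches the variable used above):

```xml
<command> rgHaploView.py $input $myLogFile </command>
<outputs>
    <data format="txt" name="myLogFile" label="rgHaploView log" />
</outputs>
```

Galaxy substitutes a real dataset path for $myLogFile at job time, and the wrapper writes everything the tool prints into that path.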
```python
lfname = sys.argv[x]   # Galaxy passes the log file name on the command line
lf = open(lfname, 'w')
....
vcl = [mogrify, '-resize 800x400!', '*.PNG']
p = subprocess.Popen(' '.join(vcl), shell=True, cwd=outfpath, stderr=lf, stdout=lf)
retval = p.wait()
s = '## executing %s returned %d\n' % (' '.join(vcl), retval)
lf.write(s)
....
lf.close()
```
Or simply use stderr_wrapper.py:
```python
#!/usr/bin/env python
"""
Wrapper that executes a program and its arguments but reports standard error
messages only if the program exit status was not 0.
Example: ./stderr_wrapper.py myprog arg1 -f arg2
"""

import sys, subprocess

assert sys.version_info[:2] >= ( 2, 4 )

def stop_err( msg ):
    sys.stderr.write( "%s\n" % msg )
    sys.exit()

def __main__():
    # Get command-line arguments
    args = sys.argv
    # Remove name of calling program, i.e. ./stderr_wrapper.py
    args.pop(0)
    # If there are no arguments left, we're done
    if len(args) == 0:
        return
    # If one needs to silence stdout:
    #args.append( ">" )
    #args.append( "/dev/null" )
    cmdline = " ".join(args)
    try:
        # Run program
        proc = subprocess.Popen( args=cmdline, shell=True, stderr=subprocess.PIPE )
        returncode = proc.wait()
        # Capture stderr, allowing for case where it's very large
        stderr = ''
        buffsize = 1048576
        try:
            while True:
                stderr += proc.stderr.read( buffsize )
                if not stderr or len( stderr ) % buffsize != 0:
                    break
        except OverflowError:
            pass
        # Program failed: raise with the captured stderr
        if returncode != 0:
            raise Exception, stderr
    except Exception, e:
        # Write the error message to stderr and exit
        stop_err( 'Error:\n' + str( e ) )

if __name__ == "__main__":
    __main__()
```
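The script above targets the Python 2 interpreters current when it was written (note the `raise Exception, stderr` syntax). On a modern Python 3 installation the same pattern can be sketched more compactly with `subprocess.run`; this is an illustrative rewrite, not a script shipped with Galaxy:

```python
#!/usr/bin/env python3
"""Sketch of the stderr_wrapper pattern for Python 3 (illustrative only)."""
import subprocess
import sys


def run_and_suppress_stderr(args):
    """Run a command, capturing stderr; re-emit it only if the command failed."""
    proc = subprocess.run(args, stderr=subprocess.PIPE)
    if proc.returncode != 0:
        # Command failed: pass its stderr through so Galaxy records the error.
        sys.stderr.write(proc.stderr.decode(errors="replace"))
    return proc.returncode


if __name__ == "__main__":
    sys.exit(run_and_suppress_stderr(sys.argv[1:]))
```

Because the wrapper exits with the wrapped command's own return code, Galaxy's default behavior (error on non-zero exit, or on any stderr output) still works as expected.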
Your <command> section should then be modified from something like

```xml
<command> myprog arg1 -f arg2 </command>
```

to

```xml
<command> stderr_wrapper.py myprog arg1 -f arg2 </command>
```