
NAME
panalysis.PLS - An example/tutorial script showing how to access analysis tools
SYNOPSIS
# run an analysis with your sequence in a local file
./panalysis.PLS -n 'edit.seqret' -w -r \
sequence_direct_data=@/home/testdata/my.seq
See more examples in the text below.
DESCRIPTION
A client showing how to use the "Bio::Tools::Run::Analysis" module, a module for executing and controlling
local or remote analysis tools. It also calls methods from the "Bio::Tools::Run::AnalysisFactory" module, a
module providing lists of available analyses.
Primarily, this client is meant as an example of how to use the analysis modules, and also to test them.
However, because it has many options (in order to cover as many methods as possible), it can also be used
as a fully functional command-line client for accessing various analysis tools.
Defining location and access method
"panalysis.PLS" is independent on the access method to the remote analyses (the analyses running on a
different machines). The method used to communicate with the analyses is defined by the "-A" option, with
the default value soap. The other possible values (not yet supported, but coming soon) are corba and
local.
Each access method may have different meaning for parameter "-l" defining a location of services giving
access to the analysis tools. For example, the soap access expects a URL of a Web Service in the "-l"
option, while the corba access may find here a stringified Interoperable Object Reference (IOR).
A default location for the soap access is "http://www.ebi.ac.uk/soaplab/services" which represents
services running at European Bioinformatics Institute on top of over hundred EMBOSS analyses (and on top
of few others).
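For example, the defaults can also be spelled out explicitly; the following should be equivalent to the
SYNOPSIS example above, only naming the default access method and location:
./panalysis.PLS -A soap -l http://www.ebi.ac.uk/soaplab/services \
-n 'edit.seqret' -w -r \
sequence_direct_data=@/home/testdata/my.seq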
Available analyses
"panalysis.PLS" can show a list of available analyses (from the given location using given access
method). The "-L" option shows all analyses, the "-c" option lists all available categories (a category
is a group of analyses with similar functionality or processing similar type of data), and finally the
"-C" option shows only analyses available within the given category.
Note that all these functions are provided by the module "Bio::Tools::Run::AnalysisFactory" (or, more
precisely, by one of its access-dependent sub-classes). The module also has a factory method
"create_analysis" which is not used by this script.
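For example, to explore what is available at the default location (the category name 'edit' below is only
a guess - use one of the names actually printed by the "-c" option):
./panalysis.PLS -L
./panalysis.PLS -c
./panalysis.PLS -C 'edit'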
Service
A "service" is a higher level of abstraction of an analysis tool. It understands a well defined interface
(module "Bio::AnalysisI", a fact which allows this script to be independent on the access protocol to
various services.
The service name must be given by the "-n" option. This option can be omitted only if you invoke just the
"factory" methods (described above).
Each service (representing an analysis tool, a program, or an application) has its description, available
by using the options "-a" (analysis name, type, etc.), "-i" and "-I" (specification of the analysis input
data, most importantly their names), and "-o" and "-O" (result names and their types). The "-d" option
gives the most detailed description, in XML format.
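For example, the 'edit.seqret' service used throughout this document could be examined with:
./panalysis.PLS -n 'edit.seqret' -a
./panalysis.PLS -n 'edit.seqret' -o
./panalysis.PLS -n 'edit.seqret' -d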
The service description is useful, but the most important thing is to use the service to invoke the
underlying analysis tool. For each invocation, the service creates a "job" and feeds it with input data.
There are three stages: (a) create a job, (b) run the job, and (c) wait for its completion.
Correspondingly, there are three options: "-b", which just creates (builds) a job; "-x", which creates a
job and executes it; and finally "-w", which creates a job, runs it and blocks the client until the job is
finished. Only one of these options is used at a time (it does not make sense to combine them;
"panalysis.PLS" prioritizes them in the order "-x", "-w", and "-b").
All of these options take input data from the command line (see the next section) and all of them return
(internally) an object representing a job. There are many methods (options) dealing with the job objects
(see the section after the next one).
One last note for this section: the "-b" option is actually optional - a job is created even without this
option when some input data are found on the command line. You have to use it, however, if you do not
pass any data to the analysis tool (an example would be the famous "Classic::HelloWorld" service; see the
sketch below).
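A minimal sketch, assuming the 'Classic::HelloWorld' service is available at the chosen location: the
first command only builds a job without input data, the second runs a job, waits, and prints its results:
./panalysis.PLS -n 'Classic::HelloWorld' -b
./panalysis.PLS -n 'Classic::HelloWorld' -w -r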
Input data
Input data are given as name/value pairs, written on the command line with an equals sign between the name
and the value. If the value part starts with an un-escaped "@" character, it is treated as a local file
name and "panalysis.PLS" reads the file and uses its contents instead. Examples:
panalysis.PLS -n edit.seqret -w -r
sequence_direct_data='tatatctcccc' osformat=embl
panalysis.PLS ...
sequence_direct_data=@/my/data/my.seq
The names of the input data come from the "input specification" that can be shown by the "-i" or "-I"
options. The input specification (when using the "-I" option) also shows - for some inputs - a list of
allowed values. The specification, however, does not tell which input data are mutually exclusive, or
what other constraints apply. If there is a conflict, an error message is produced later (before the job
starts).
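For example, the input names of the 'edit.seqret' service (and, with "-I", the allowed values) can be
listed with:
./panalysis.PLS -n 'edit.seqret' -i
./panalysis.PLS -n 'edit.seqret' -I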
Input data are used when any of the options "-b", "-x", or "-w" is present, but the "-j" option is not
(see the next section about this job option).
Job
Each service (defined by a name given in the "-n" option) can be executed one or more times, with the
same, but usually different, input data. Each execution creates a job object. Actually, the job is created
even before execution (remember that the "-b" option builds a job but does not execute it yet).
Any job, executed or not, is persistent and can be used again later from another invocation of the
"panalysis.PLS" script, unless you explicitly destroy it using the "-z" option (an example is shown at the
end of this section).
A job created by the "-b", "-x" or "-w" options (and by input data) can be accessed in the same
"panalysis.PLS" invocation using various job-related options, the most important being "-r" and "-R" for
retrieving results from the finished job.
However, you can also re-create a job created by a previous invocation. Assuming that you know the job ID
("panalysis.PLS" always prints it on the standard error when a new job is created), use the "-j" option to
re-create the job.
Example:
./panalysis.PLS -n 'edit.seqret'
sequence_direct_data=@/home/testdata/my.seq
It prints:
JOB ID: edit.seqret/bb494b:ef55e47c99:-8000
The next invocation (asking to run the job, to wait for its completion and to show the job status) can be:
./panalysis.PLS -n 'edit.seqret'
-j edit.seqret/bb494b:ef55e47c99:-8000
-w -s
And later still, another invocation can ask for the results:
./panalysis.PLS -n 'edit.seqret'
-j edit.seqret/bb494b:ef55e47c99:-8000
-r
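Finally, when the persistent job is no longer needed, it can be destroyed; this sketch assumes that the
"-z" option is combined with "-j" to identify the job:
./panalysis.PLS -n 'edit.seqret'
-j edit.seqret/bb494b:ef55e47c99:-8000
-z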
Here is a list of all the job options (except for the result options, which are covered in the next
section):
Job execution and termination
The same options "-x" and "-w" described above are used to execute a job, or to execute it and wait for
its completion. Here, however, they act on the job given by the "-j" option and do not use any input data
from the command line (the input data had to be supplied when the job was created).
Additionally, there is a "-k" option to kill a running job.
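For example, to kill the job created above (again identified by "-j"):
./panalysis.PLS -n 'edit.seqret'
-j edit.seqret/bb494b:ef55e47c99:-8000
-k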
Job characteristics
Other options report the job status ("-s"), the job execution times ("-t" and "-T"), and the last
available event describing what happened with the job ("-e"). Note that the event notification is not yet
fully implemented, so this option will change in the future to reflect more notification capabilities.
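For example (assuming these options can be combined in a single invocation, just as "-w -s" was combined
above):
./panalysis.PLS -n 'edit.seqret'
-j edit.seqret/bb494b:ef55e47c99:-8000
-s -t -T -e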
Results
Of course, the most important part of the analysis tools is their results. The results are named (in a
similar way to the input data) and they can be retrieved all in one go using the "-r" option (so you do
not actually need to know their names), or by specifying (all or some) result names using the "-R" option.
If a result does not exist (either not yet, or because the name is wrong), an undef value is returned (no
error message is produced).
Some results are better saved directly into files instead of being shown in the terminal window (this
applies to binary results, mostly containing images). "panalysis.PLS" helps to deal with binary results by
saving them automatically to local files (actually it is the module "Bio::Tools::Run::Analysis" and its
submodules that do the work with the binary data).
So why not use a traditional shell redirection to a file? There are two reasons. First, a job can produce
more than one result, so they would be mixed together. But mainly, because each result can consist of
several parts whose number is not known in advance and which cannot be mixed together in one file. Again,
this is typical of binary results containing images - a single invocation can produce many images.
The "-r" option retrieves all available results and treat them as described by the '?' format below.
The "-R" option has a comma-separated list of result names, each of the names can be either a simple name
(as specified by the "result specification" obtainable using the "-o" or "-O" options), or a equal-sign-
separated name/format construct suggesting what to do with the result. The possibilities are:
result-name
It prints the given result on the standard output.
result-name=filename
It saves the given result into the given file.
result-name=@
It saves the given result into a file whose name is automatically invented, and it guarantees that
the same name will not be used in the next invocation.
result-name=@template
It saves the given result into a file whose name is given by the "template". The template can contain
several strings which are substituted before using it as the filename:
Any '*'
Will be replaced by a unique number
$ANALYSIS or ${ANALYSIS}
Will be replaced by the current analysis name
$RESULT or ${RESULT}
Will be replaced by the current result name
Additionally, a template can be given in the environment variable "RESULT_FILENAME_TEMPLATE". Such a
variable is used for any result whose format is a simple "?" or "@" character (see the last example
below).
result-name=?
It first decides whether the given result is binary or not. Binary results are then saved into local files
whose names are automatically invented; the other results are sent to the standard output.
result-name=?template
The same as above but the filenames for binary files are deduced from the given template (using the
same rules as described above).
Examples:
-r
-R report
-R report,outseq
-R Graphics_in_PNG=@
-R Graphics_in_PNG=@$ANALYSIS-*-$RESULT
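Additionally, the "RESULT_FILENAME_TEMPLATE" environment variable mentioned above can be set in the shell;
it should then supply the template for results whose format is a plain "?" or "@". A sketch, assuming a
Bourne-style shell and a service producing the binary result used in the examples above:
RESULT_FILENAME_TEMPLATE='$ANALYSIS-*-$RESULT' \
./panalysis.PLS ... -R Graphics_in_PNG=@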
Note that the result formatting will be enriched in the future by using existing data type parsers in
bioperl.
FEEDBACK
Mailing Lists
User feedback is an integral part of the evolution of this and other Bioperl modules. Send your comments
and suggestions preferably to the Bioperl mailing list. Your participation is much appreciated.
bioperl-l@bioperl.org - General discussion
http://bioperl.org/wiki/Mailing_lists - About the mailing lists
Reporting Bugs
Report bugs to the Bioperl bug tracking system to help us keep track of the bugs and their resolution.
Bug reports can be submitted via the web:
http://redmine.open-bio.org/projects/bioperl/
AUTHOR
Martin Senger (martin.senger@gmail.com)
COPYRIGHT
Copyright (c) 2003, Martin Senger and EMBL-EBI. All Rights Reserved.
This script is free software; you can redistribute it and/or modify it under the same terms as Perl
itself.
DISCLAIMER
This software is provided "as is" without warranty of any kind.
BUGS AND LIMITATIONS
None known at the time of writing this.