Using the Virtual Analysis Facility
===================================

To use the Virtual Analysis Facility you only need the following
software installed on your client:

- [PROOF on Demand](http://pod.gsi.de)

- The VAF client *(see below)*: a convenience tool that sets up the
  environment for your experiment's software both on your client and on
  the PROOF worker nodes

> If you are the end user, you can probably skip the part that
> concerns how to configure the VAF client: your system administrator
> has probably already set it up for you.

The Virtual Analysis Facility client
------------------------------------

The Virtual Analysis Facility client takes care of setting up, on
behalf of the end user, the environment required by your experiment's
software. The environment is set up both on the client and on each
PROOF node.

Technically, it is a Bash shell script which provides shortcuts for
PROOF on Demand commands and ensures consistency between the local and
remote environments: by executing it you enter a new clean environment
where all your software dependencies have already been set up.

Local and remote environment configuration is split into a series of
files, which makes it possible to:

- have a system-wide, sysadmin-provided experiment configuration

- execute user actions either *before* or *after* the execution of the
  system-wide script (for instance, choosing the preferred version of
  the experiment's software)

- transfer a custom user **payload** to each PROOF worker (for
  instance, the user's client-generated Grid credentials, to make PROOF
  workers capable of accessing a remote authenticated storage)

Configuration files are searched for in two different locations:

- a system-wide directory: `<client_install_dir>/etc`

- the user's home directory: `~/.vaf`

> A system-wide configuration file always has precedence over the
> user's configuration. It is thus possible for the sysadmin to enforce
> a policy where some scripts cannot ever be overridden.

Thanks to this separation, users can maintain very simple configuration
files that contain only what really needs to be, or is allowed to be,
customized: for instance, a user might specify a single line containing
the needed ROOT version, while all the technicalities of setting up the
environment are taken care of inside system-installed scripts, leaving
the user's configuration directory clean and uncluttered.

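As a sketch, assuming the system-wide scripts read a hypothetical
`VafRootVersion` variable (the actual name depends on your experiment's
scripts), a user's whole configuration could be the single file
`~/.vaf/common.before` containing:

```bash
# ~/.vaf/common.before (sketch): pick the desired ROOT version; the
# variable name is made up and depends on your experiment's scripts
export VafRootVersion='v5-34-18'
```
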
### Local environment configuration

All the local environment files are loaded at the client's startup in
the following order:

- `local.before`
- `common.before`
- `local.conf`
- `$VafConf_LocalPodLocation/PoD_env.sh`
- `common.after`
- `local.after`

The `common.*` files are sourced for both the local and the remote
environment. This is convenient to avoid repeating the same
configuration in two different places.

Each file is looked for first in the system-wide directory and then in
the user's directory. If a configuration file does not exist, it is
silently skipped.

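A minimal sketch of this lookup logic follows (one possible reading of
the precedence rule, not the actual client code; `VAF_INSTALL_DIR` is a
made-up name for the client installation directory):

```bash
# Sketch: source a configuration file, system-wide copy first, then the
# user's; a file that exists nowhere is silently skipped
load_config() {
  local name="$1"
  if [[ -r "$VAF_INSTALL_DIR/etc/$name" ]]; then
    source "$VAF_INSTALL_DIR/etc/$name"   # system-wide file has precedence
  elif [[ -r "$HOME/.vaf/$name" ]]; then
    source "$HOME/.vaf/$name"             # fall back to the user's file
  fi
}
load_config 'common.before'
```
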
The `$VafConf_LocalPodLocation/PoD_env.sh` environment script, provided
with each PROOF on Demand installation, *must exist*: without this
file, the VAF client won't start.

### List of VAF-specific variables

There are some special variables that need to be set in one of the
above configuration files:

`$VafConf_LocalPodLocation`
: Full path to the PoD installation on the client.

> The `$VafConf_LocalPodLocation` variable must be set before the
> `PoD_env.sh` script gets sourced, so set it either in
> `common.before`, `local.before` or `local.conf`. Since PoD is
> usually installed system-wide, its location is normally set
> system-wide as well, e.g. in the `local.conf` file provided by the
> system administrator.

`$VafConf_RemotePodLocation`
: Full path to the PoD installation on the VAF master node.

    *Note: this variable should be set in the configuration files for
    the local environment even though it refers to software installed
    on the remote node.*

`$VafConf_PodRms` *(optional)*
: Name of the Resource Management System used for submitting PoD jobs.
  Run `pod-submit -l` to see the possible values.

    If not set, defaults to `condor`.

`$VafConf_PodQueue` *(optional)*
: Queue to which PoD jobs are submitted.

    If no queue is given, the default one configured in your RMS will
    be used.

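Putting it together, a system-wide `local.conf` might look like the
following sketch (the local path and the queue name are made up; the
remote path is the example value used elsewhere on this page):

```bash
# Sketch of a local.conf: adjust all values to your installation
export VafConf_LocalPodLocation="$HOME/PoD/3.12"  # PoD on the client
export VafConf_RemotePodLocation='/cvmfs/sft.cern.ch/lcg/external/PoD/3.12/x86_64-slc5-gcc41-python24-boost1.53'  # PoD on the VAF master
export VafConf_PodRms='condor'   # optional: this is the default anyway
export VafConf_PodQueue='proof'  # optional: queue name is made up
```
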
### Remote environment configuration

All the PoD commands sent to the VAF master will live in the
environment loaded by means of the following scripts.

Similarly to the local environment, the configuration is split into
different files to allow for a system-wide configuration, which has
precedence over the user's configuration in the home directory. If a
script cannot be found, it will be silently skipped.

- `<output_of_payload>`

For an explanation of how to safely pass extra data to the workers
through the payload, see below.

### Payload: sending local files to the remote nodes

In many cases it is necessary to send some local data to the remote
workers: it is very common, for instance, to distribute a local Grid
authentication proxy to the remote workers so that they can access an
authenticated remote storage.

The `payload` file must be an executable generating some output that
will be prepended to the remote environment preparation. Unlike the
other environment scripts, the payload is not executed remotely as it
is: instead, it is first run locally, then *the output it produces* is
executed on the remote node.

Let's see a practical example to better understand how it works. We
need to send our Grid proxy to the master node.

This is our `payload` executable script:

171 echo "echo '`cat /tmp/x509up_u$UID | base64 | tr -d
'\r\n'`
'" \
172 "| base64 -d > /tmp/x509up_u\$UID"
This script will be executed locally, providing another "script line"
as output, similar to the following:

```bash
echo 'VGhpcyBpcyB0aGUgZmFrZSBjb250ZW50IG9mIG91ciBHcmlkIHByb3h5IGZpbGUuCg==' | base64 -d > /tmp/x509up_u$UID
```

This line will be prepended to the remote environment script and will
be executed before anything else on the remote node: it will
effectively decode the Base64 string back to the proxy file and write
it into the `/tmp` directory. Note also that the first `$UID` is not
escaped and will be substituted *locally* with your user ID *on your
client machine*, while the second one has the dollar escaped (`\$UID`)
and will be substituted *remotely* with your user ID *on the remote
node*.

> It is worth noting that the remote environment scripts are sent to
> the remote node over a secure connection (SSH), so there is no
> concern in placing sensitive user data there.

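If you want to verify locally that the Base64 round-trip preserves the
proxy byte by byte, a quick sketch:

```bash
# Sketch: encode and decode the proxy, then compare with the original
base64 /tmp/x509up_u$UID | tr -d '\r\n' | base64 -d \
  | cmp - /tmp/x509up_u$UID && echo 'round-trip OK'
```
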
Installing the Virtual Analysis Facility client
-----------------------------------------------

### Download the client from Git

The Virtual Analysis Facility client is available on
[GitHub](https://github.com/dberzano/virtual-analysis-facility):

```bash
git clone git://github.com/dberzano/virtual-analysis-facility.git /dest/dir
```

The client will be found in `/dest/dir/client/bin/vaf-enter`: it is
convenient to add it to the `$PATH` so that users can simply start it
by typing `vaf-enter`.

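For instance (adjusting `/dest/dir` to your actual installation path):

```bash
# Make vaf-enter reachable from any shell
export PATH="/dest/dir/client/bin:$PATH"
```
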
### Install the experiment's configuration files system-wide

A system administrator might find it convenient to install the
experiment environment scripts system-wide.

Configuration scripts for LHC experiments are shipped with the VAF
client and can be found in
`/dest/dir/client/config-samples/<experiment_name>`. To make the VAF
client use them by default, place them in the `/dest/dir/etc`
directory:

```bash
rsync -a /dest/dir/client/config-samples/<experiment_name>/ /dest/dir/etc/
```

Remember that the trailing slash in the source directory name is
meaningful in `rsync` and must not be omitted: with it, `rsync` copies
the *contents* of the source directory; without it, it would create an
`<experiment_name>` subdirectory inside `/dest/dir/etc/`.

> Remember that system-wide configuration files will always have
> precedence over the user's configuration files, so *don't place files
> there that are supposed to be provided by the user!*

Entering the Virtual Analysis Facility environment
--------------------------------------------------

The Virtual Analysis Facility client is a wrapper around commands sent
to the remote host by means of PROOF on Demand's `pod-remote`. The VAF
client takes care of setting up passwordless SSH from your client node
to the remote host.

### Getting the credentials

> You can skip this paragraph if the remote server wasn't configured
> for HTTPS+SSH authentication.

In our example we will assume that the remote server's name is
`cloud-gw-213.to.infn.it`: substitute it with your remote endpoint.

First, check that you have your Grid certificate and private key
installed both in your browser and in the `~/.globus` directory of your
client machine.

Point your browser to `https://cloud-gw-213.to.infn.it/auth/`: you'll
probably be asked to choose a certificate for authentication. Pick one,
and you'll be presented with a web page that clearly explains what to
do next.

### Customizing the user's configuration

Before entering the VAF environment, you should customize the user's
configuration. How to do so depends on your experiment, but usually you
essentially need to specify the version of the experiment's software
you intend to use.

For instance, in the CMS use case, only one file is needed:
`~/.vaf/common.before`, which contains something like:

```bash
# Version of CMSSW (as reported by "scram list")
export VafCmsswVersion='CMSSW_5_3_9_sherpa2beta2'
```

### Entering the VAF environment

Open a terminal on your client machine (either your local computer or a
remote user interface) and type:

```bash
vaf-enter <username>@cloud-gw-213.to.infn.it
```

Substitute `<username>` with the username provided to you either by
your system administrator or by the web authentication (if you used
it).

You'll be presented with a neat shell which looks like the following:

```
Entering VAF environment: dberzano@cloud-gw-213.to.infn.it
Remember: you are still in a shell on your local computer!
```

This shell runs on your local computer and it has the environment of
your experiment already set up.

PoD and PROOF workflow
----------------------

> The following operations are valid inside the `vaf-enter`
> environment.

### Start your PoD server

With PROOF on Demand, each user has control of their own personal PROOF
cluster. The first thing to do is to start the PoD server and the PROOF
master; the VAF client provides a shortcut for that.

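As a sketch (assuming the `pod-remote` flags of PoD 3.x; check
`pod-remote --help` on your installation), the underlying command looks
like:

```bash
# Sketch: start the remote PoD server through pod-remote; host and PoD
# path are the example values used elsewhere on this page
pod-remote --start \
  --remote dberzano@cloud-gw-213.to.infn.it:/cvmfs/sft.cern.ch/lcg/external/PoD/3.12/x86_64-slc5-gcc41-python24-boost1.53
```
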
A successful output will be similar to:

```
** Starting remote PoD server on dberzano@cloud-gw-213.to.infn.it:/cvmfs/sft.cern.ch/lcg/external/PoD/3.12/x86_64-slc5-gcc41-python24-boost1.53
** Server is started. Use "pod-info -sd" to check the status of the server.
```

### Request and wait for workers

Now the server is started, but you don't have any workers available
yet. The next step is to request `<n>` workers through the
corresponding VAF shortcut.

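As a sketch, the plain PoD equivalent of such a request (through the
RMS and queue configured via `$VafConf_PodRms` and `$VafConf_PodQueue`)
would be:

```bash
# Sketch: request 10 PoD workers through HTCondor, the default RMS here
pod-submit -r condor -n 10
```
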
You can then check how many workers have become available for use.

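With plain PoD, `pod-info` can report this number; a sketch:

```bash
# Sketch: print the number of PROOF workers currently online
pod-info -n
```
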
The check can also be updated continuously (`Ctrl-C` to terminate); the
monitor prints:

```
Updating every 5 seconds. Press Ctrl-C to stop monitoring...
```

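A generic way to obtain such a monitoring loop, as a sketch:

```bash
# Sketch: refresh the worker count every 5 seconds (Ctrl-C to stop)
watch -n 5 pod-info -n
```
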
Finally, you can execute a command as soon as a certain number of
workers is available: for instance, wait for 5 workers, then start
ROOT.

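A minimal Bash sketch of such a wait, using plain PoD:

```bash
# Sketch: block until at least 5 workers are online, then start ROOT
while [ "$(pod-info -n)" -lt 5 ]; do sleep 5; done
root -l
```
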
> Workers take some time before becoming available. Also, it is
> possible that not all the requested workers will be granted.

### Start ROOT and use PROOF

When you are satisfied with the number of active workers, you may start
your PROOF analysis. Start ROOT, and from its prompt connect to your
PROOF cluster:

```
root [0] TProof::Open("pod://");
```

```
Starting master: opening connection ...
Opening connections to workers: OK (12 workers)
Setting up worker servers: OK (12 workers)
PROOF set to parallel mode (12 workers)
```

### Stop or restart your PoD cluster

At the end of your session, remember to free the workers by stopping
your PoD server.

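As a sketch, the plain PoD equivalent is:

```bash
# Sketch: stop your personal PoD server, freeing all of its workers
pod-server stop
```
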
> PoD will anyway stop the PROOF master and the workers after detecting
> that they have been idle for a certain amount of time, but it is a
> good habit to stop it yourself when you're finished, so that
> resources are freed immediately and made available to other users.

In case of a major PROOF failure (e.g., a crash), you can simply
restart your personal PROOF cluster.

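With plain PoD, a sketch of the restart:

```bash
# Sketch: restart the PoD server; the PROOF master is restarted with it
pod-server stop && pod-server start
```
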
PoD will stop and restart the PROOF master. You'll need to request the
workers again at this point.