Computing
Links
ROOT | ROOT class documentation |
HADES WebDB | ORACLE web interface for parameters etc. |
GEANT3 | GEANT3 documentation at CERN |
HadesGeant | |
Pluto | A Monte Carlo simulation tool for hadronic physics |
UrQMD | Frankfurt UrQMD |
HSD | Hadron-String-Dynamics transport Model |
HYDRA online Documentation
hydra-dev (ROOT docu) doxygen docu | Developer version of hydra2 |
HGEANT CONFIG INFO | help on the available options of the config file |
hydra common Parameter Initialisation, Geometry | HYDRA manuals |
hydra2 + hgeant2 source code on /cvmfs/hades.gsi.de/source | For browsing code using grep etc ... |
hydra_v8.21 (ROOT docu) | Last release of hydra |
Analysis workshop 2012 | workshop 2012 at GSI |
Analysis workshop 2016 | workshop 2016 at GSI |
Analysis workshop 2017 | workshop 2017 at GSI |
DST Production | Documentation of the dst production |
HYDRA2 Code Release Change Log
hydra2-4.9 | |
hydra2-4.9a | |
hydra2-4.9b | |
hydra2-4.9c | |
hydra2-4.9d | |
hydra2-4.9e | used for jul14/aug14 DST gen1 |
hydra2-4.9f | used for apr12 DST gen8 |
hydra2-4.9g | |
hydra2-4.9h | |
hydra2-4.9i | used for apr12 gen8 sim DST |
hydra2-4.9j | |
hydra2-4.9k | used for jul14+aug14 gen2 DST |
hydra2-4.9l | used for apr12 gen8 embedding DST |
hydra2-4.9m | |
hydra2-4.9n | |
hydra2-4.9o | |
hydra2-4.9p | |
hydra2-4.9q | ECAL + RICH700 integration |
hydra2-4.9r | |
hydra2-4.9s | |
hydra2-4.9u | |
hydra2-4.9v | |
hydra2-4.9w | |
hydra2-5.0 | |
hydra2-5.1 | |
hydra2-5.1a | |
hydra2-5.2 | |
hydra2-5.3 | |
hydra2-5.3a | |
hydra2-5.4 | |
hydra2-5.4a | |
hydra2-5.4b | |
hydra2-5.5 | |
hydra2-5.5a | mar19 gen4 |
HYDRA Tutorials
Tracking tutorial | tutorial 28.03.2013, GSI |
Batch farm
virgo cluster
The batch farm runs on the virgo.hpc.gsi.de cluster. Batch jobs are managed by the SLURM (SL) scheduler. Documentation on the cluster and SLURM can be found here.
New Hades users have to be added to the hades account before they can run jobs on the farm. The list of hades users is maintained by J.Markert@gsi.de.
Some rules for working with virgo.hpc.gsi.de:
- Files are written to the /lustre filesystem. Each Hades user owns a directory /lustre/hades/user/${USER} to work on the farm (NO BACKUP!).
- SLURM does not support any filesystem other than /lustre. Batch scripts can use neither the user's home directory nor any other filesystem.
- The Hades software is distributed to the batch farm via /cvmfs/hades.gsi.de/ (debian8) or /cvmfs/hadessoft.gsi.de (debian10). /cvmfs/hades.gsi.de and /cvmfs/hadessoft.gsi.de are also available on desktop machines, depending on the OS version.
- Batch jobs can be submitted to the farm from the virgo.hpc.gsi.de cluster. These machines provide our software, the user ${HOME} and a filesystem mount of /lustre. You can compile and test (run) your programs here.
- A set of example batch scripts for Pluto, UrQmd, HGeant, DSTs and user analysis can be retrieved with
  svn checkout https://subversion.gsi.de/hades/hydra2/trunk/scripts/batch/GE
The folders contain sendScript_SL.sh + jobScript_SL.sh (SL).
The general concept is to work with file lists as input to the sendScript, which takes care of syncing from the user's home directory to the submission directory on /lustre. The files in the list are split automatically into job arrays to minimize the load on the scheduler. The sendScript finally calls the sbatch command of SLURM to submit the job. The jobScript is the part which runs on the batch nodes (a minimal sketch is shown below).
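A minimal sketch of this sendScript/jobScript concept, assuming a hypothetical file list files.list and simplified arguments (the real sendScript_SL.sh from the repository above additionally handles resources, log directories and the array offset):
//-------------------------------
#!/bin/bash
# sendScript sketch: sync inputs to /lustre and submit one array task per file
filelist=files.list                              # hypothetical: one input file per line
submissiondir=/lustre/hades/user/${USER}/submit  # working dir on /lustre (no backup!)
mkdir -p ${submissiondir}
cp ${filelist} jobScript_SL.sh ${submissiondir}/ # sync from home dir to /lustre
nfiles=$(wc -l < ${submissiondir}/${filelist})
sbatch --array=1-${nfiles} -D ${submissiondir} \
       --output=${submissiondir}/slurm-%A_%a.out \
       ${submissiondir}/jobScript_SL.sh ${submissiondir}/${filelist}
//-------------------------------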
Working on the batch farm of GSI:
virgo cluster documentation:
https://hpc.gsi.de/virgo/user-guide/storage.html
ssh key authentication
https://hpc.gsi.de/virgo/user-guide/access/key-authentication.html
Since 1st of September 2021 we have two software environments
available:
debian10: newest software releases compiled with ROOT 6.24.02 and gcc 8.3.
Use this for the analysis of DST files of the current mar19 beam. The login
to the farm uses vae23.hpc.gsi.de (debian10, ROOT6); no special action
for handling the container environment with singularity is needed.
It behaves like the no longer available virgo-debian8.hpc.gsi.de.
debian8: old software releases compiled with ROOT 5.34.34 and gcc 4.9.2.
This covers all versions up to hydra2-5.6a used for the DST productions
of apr12, aug14, jul14 and mar19 (gen5) before 1st of September 2021.
Use this if you want exactly the same behaviour as before the switch and
do not want to change any software. To make this environment work, some
handling of the singularity options is needed to start and submit jobs
using this container environment. It is described below.
#############################################################
virgo cluster usage information:
virgo.hpc.gsi.de (bare bone, start your container yourself),
vae23.hpc.gsi.de (debian10 container started at login)
Problems and tips:
- If your login to virgo.hpc.gsi.de does not work although you have/created
  ssh keys (https://hpc.gsi.de/virgo/access/key_authentication.html),
  try to clean up .ssh/authorized_keys: remove other keys which need other
  credentials and might cause problems.
- ksh does not work with the login, use bash (accounts-service@gsi.de).
  ksh is not installed at virgo and will lead to "permission denied"
  statements without further explanation. accounts-service@gsi.de is
  responsible for changing your login shell. It will take a while to see
  the changes; syncs are performed once per day at 22:00.
- X does not work at virgo at the moment:
  a. use lx-pool.gsi.de (or any desktop machine) to look at /lustre output
  b. use sshfs to mount /lustre on any linux machine which does not already
     mount /lustre. To mount /lustre use
     sshfs user@virgo.hpc.gsi.de:/lustre mymountpoint
     sshfs user@lustre.hpc.gsi.de:/lustre mymountpoint
     vae23.hpc.gsi.de will not work with sshfs, since the session is closed
     after the sshfs command has returned.
Our sendScript_SL.sh for batch submission from before the use of singularity
containers needs a small modification for virgo:
# from inside the virgo container
command="--array=1-${stop} ${resources} -D ${submissiondir} --output=${pathoutputlog}/slurm-%A_%a.out -- ${jobscript} ${submissiondir}/${jobarrayFile} ${pathoutputlog} ${arrayoffset}"
For virgo there is an additional " -- " between the SLURM options and the
user script + parameters of the script which should be started on the farm.
This is needed since SLURM separates them from the container which is
started. The container version is chosen automatically from the submit host.
#############################################################
debian10 container login:
ssh username@vae23.hpc.gsi.de (virgo)
This command will start a container-based instance of GSI debian10.
debian8 container login:
working environment debian8:
1. login: ssh username@virgo.hpc.gsi.de
2. start debian8 container:
. start_debian8.sh
start_debian8.sh:
//-------------------------------
export SSHD_CONTAINER_DEFAULT="/cvmfs/vae.gsi.de/debian8/containers/user_container-production.sif"
export SSHD_CONTAINER_OPTIONS="--bind /etc/slurm,/var/run/munge,/var/spool/slurm,/var/lib/sss/pipes/nss,/cvmfs/vae.gsi.de,/cvmfs/hadessoft.gsi.de/install/debian8/install:/cvmfs/hades.gsi.de/install,/cvmfs/hadessoft.gsi.de/param:/cvmfs/hades.gsi.de/param,/cvmfs/hadessoft.gsi.de/install/debian8/oracle:/cvmfs/it.gsi.de/oracle"
shell=$(getent passwd $USER | cut -d : -f 7)
STARTUP_COMMAND=$(cat << EOF
srun() { srun-nowrap --singularity-no-bind-defaults "\$@"; }
sbatch() { sbatch-nowrap --singularity-no-bind-defaults "\$@"; }
export -f srun sbatch
$shell -l
EOF
)
export SINGULARITYENV_PS1="\u@\h:\w > "
export SINGULARITYENV_SLURM_SINGULARITY_CONTAINER=$SSHD_CONTAINER_DEFAULT
test -f /etc/motd && cat /etc/motd
echo Container launched: $(realpath $SSHD_CONTAINER_DEFAULT)
exec singularity exec $SSHD_CONTAINER_OPTIONS $SSHD_CONTAINER_DEFAULT $shell -c "$STARTUP_COMMAND"
//-------------------------------
This environment allows you to compile and run code. SL_mon.pl and
access to the SLURM commands will not work inside the debian8 container;
virgo.hpc.gsi.de + vae23.hpc.gsi.de allow the use of SL_mon.pl + SLURM.
3. our sendScript_SL.sh for batch submission works
for the vae23.hpc.gsi.de login. NO ADDITIONAL wrap.sh NEEDED
(see below)!
------------------------------------------------------------
Submission of batch jobs for debian8 on virgo3:
1. login to virgo.hpc.gsi.de
2. modify your sendScript_SL.sh to use wrap.sh
to start the debian8 container on the farm.
wrap.sh:
//-------------------------------
#!/bin/bash
jobscript=$1
jobarrayFile=$2
pathoutputlog=$3
arrayoffset=$4
singularity exec \
-B /cvmfs/hadessoft.gsi.de/install/debian8/install:/cvmfs/hades.gsi.de/install \
-B /cvmfs/hadessoft.gsi.de/param:/cvmfs/hades.gsi.de/param \
-B /cvmfs/hadessoft.gsi.de/install/debian8/oracle:/cvmfs/it.gsi.de/oracle \
-B /lustre \
/cvmfs/vae.gsi.de/debian8/containers/user_container-production.sif ${jobscript} ${jobarrayFile} ${pathoutputlog} ${arrayoffset}
//-------------------------------
In sendScript_SL.sh, in the lower part where the sbatch command is built:
#virgo bare bone submit using wrap.sh
wrap=./wrap.sh
command="--array=1-${stop} ${resources} -D ${submissiondir} --output=${pathoutputlog}/slurm-%A_%a.out -- ${wrap} ${jobscript} ${submissiondir}/${jobarrayFile} ${pathoutputlog} ${arrayoffset}"
SLURM tips:
The most relevant commands to work with SL:
sbatch : sbatch submits a batch script to SLURM.
squeue : used to view job and job step information for jobs managed by SLURM.
scancel : used to signal or cancel jobs, job arrays or job steps.
sinfo : used to view partition and node information for a system running SLURM.
sreport : used to generate reports of job usage and cluster utilization for
SLURM jobs saved to the SLURM Database.
scontrol : used to view or modify Slurm configuration including: job,
job step, node, partition,reservation, and overall system configuration.
Examples:
squeue -u user : show all jobs of user
squeue -t R : show jobs in a certain state (PENDING (PD),
              RUNNING (R), SUSPENDED (S), COMPLETING (CG),
              COMPLETED (CD), CONFIGURING (CF),
              CANCELLED (CA), FAILED (F), TIMEOUT (TO),
              PREEMPTED (PR), BOOT_FAIL (BF),
              NODE_FAIL (NF) and SPECIAL_EXIT (SE))
scancel -u user : cancel all jobs of user user
scancel jobid : cancel job with jobid
scancel -t PD -u <username> : cancel all pending jobs of a user
scontrol show job -d <jobid> : show detailed info about a job
scontrol hold <jobid> : hold a job (it stays pending)
scontrol release <jobid> : release a held job so it can be scheduled
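Since our jobs are submitted as job arrays (see above), these commands also accept array task IDs and ranges; for example (job ID illustrative):
//-------------------------------
# show the state of all tasks of one array job
squeue -j 1234567
# cancel a range of array tasks
scancel "1234567_[5-10]"
# cancel a single array task
scancel 1234567_42
//-------------------------------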
Disk space
DATA on /lustre/hades
SVN repositories
The Hades Subversion (svn, http://subversion.tigris.org/) repositories are located at the GSI web server. Some information about subversion at GSI can be found at http://wiki.gsi.de/Linux/SubVersion . The access authentication uses the ORACLE database of GSI. Read permission is granted anonymously. For committing code you have to have an account at http://www-oracle.gsi.de/ . This is the same account as used for the documentation of working time or the radiation safety. You do not need a GSI linux or windows account to get a user account. The user name has to be added to the subversion access management. Mail your user name and the directories you want to work with to j.markert@gsi.de.
Currently available repositories:
- https://subversion.gsi.de/hades/hydra : All history up to last stable version v8_21 (read permission only)
- https://subversion.gsi.de/hades/hgeant : All history up to last stable version v8_21 (read permission only)
- https://subversion.gsi.de/hades/hydraTrans : Transition repository to restructure all libraries and prepare for next runs from 2010 on (closed since 22.06.2011, only read permission)
- https://subversion.gsi.de/hades/hydra2 : Final repository after transition period
- https://subversion.gsi.de/hades/hgeant2 : Final repository after transition period
- https://subversion.gsi.de/hades/pluto : Final (restricted access)
- https://subversion.gsi.de/hades/fwdetsn : transition repository for Forward Detector (FWDET) development
- https://subversion.gsi.de/hades/publications : repository for publications (restricted access)
Web-front ends of the available repositories:
The GSI IT provides redmine, a web frontend for the subversion repositories, running on the apache web server. This web frontend replaces our old CVS frontend.
hades svn redmine
Basic use of Subversion:
For the documentation of subversion see http://svnbook.red-bean.com/
---------------------------------------------------------------------------------
# checkout a repository
# get full repository from trunk (main branch) into a folder hydraTrans
svn co https://subversion.gsi.de/hades/hydraTrans/trunk hydraTrans
# get a directory from the repository trunk (main branch) into a folder hydraTrans
svn co https://subversion.gsi.de/hades/hydraTrans/trunk/mdc hydraTrans/mdc
---------------------------------------------------------------------------------
# view all commands
svn help
# view help on commands
svn help status
---------------------------------------------------------------------------------
# most useful commands to work on the local working copy
# [file] means the filename is optional. In this case the commands apply to all
# files in the current directory
svn stat [file]    // show local changes (stat=status)
svn stat -u [file] // show local changes and changes on the server (-u == in update mode)
svn diff [file]    // show modification of a file against a revision
svn add file       // schedule a new file for adding. Needs a commit afterwards to send it to the repository
svn update [file]  // update file to newest revision
svn commit -m "your comment" [file] // send file [or all modified files] to the repository (requires access permissions).
                                    // Takes the user name from the checkout log
svn --username yourname commit -m "your comment" [file] // send file [or all modified files] as a given svn user to the repository
                                                        // (requires access permissions). Helpful to commit from a checkout dir of another user
tkdiff file         // show graphical diff of file against the svn base revision
tkdiff file -r head // show graphical diff of file against the newest svn head revision
HYDRA build system
Base components of the Make system
The Hades build system consists of the following Makefiles:
hades.def.mk
Contains all definitions to build any object
hades.rules.mk
Contains all rules to build a set of Hydra libraries (so called global build)
hades.module.mk
Contains all rules to build a single Hydra library
hades.app.mk
Contains all rules to build an application based on Hydra
Use it to build any library or application in the Hades software environment based on C, Fortran 77, C++ or Oracle PL/SQL. This has nothing to do with loading libraries in Root's CINT - which is a completely different subject. These makefiles are part of the Hydra distribution and saved in the admin directory of the code tree.
HowTo use existing Makefiles
Variables and settings
Before using the Makefiles to build any Hades software, setup the shell environment. The default settings related to third party software are all defined at the beginning of hades.def.mk. These frequently used variables have an influence on building Hydra and related software; they can be set in the Makefiles and on command line, or even in both locations:
HADDIR
Absolute path to the directory where Hydra is installed - needed if Hydra is not built completely from scratch. An installation consists of the directories
$HADDIR/lib
and
$HADDIR/include
containing all libraries and their header files as well as directory
$HADDIR/macros
which keeps the related rootlog*.C macros for loading all Hydra modules. In
$HADDIR
itself, the makefiles hades.*.mk are installed.
MYHADDIR
Absolute path to the directory where private and additional Hydra module versions are installed. Also here, an installation consists of the directories
$MYHADDIR/lib
and
$MYHADDIR/include
containing all private libraries and their header files.
BUILD_DIR
Absolute path to the directory where all compiler output files will be located. Default:
./build in case one builds a set of Hydra modules or an application;
../build in case one builds a single module.
INSTALL_DIR
Absolute path to the directory where all libraries and header files will be installed. Default:
Always . in case of applications.
For one or several modules: $MYHADDIR - if set.
Otherwise ./install in case one builds modules using a global makefile, and ../install in case one builds a single module in the module directory (both refer to the same directory).
USES_RFIO, USES_ORACLE, USES_GFORTRAN, USES_CERNLIB and USES_X11
To build Hydra independently of this 3rd-party software, set the related flag explicitly to no and remove the correlated modules in your global Makefile. Otherwise it depends on the default settings defined in the different module Makefiles which packages are finally used.
Use Cases and Boundary Conditions
Building Hydra completely from scratch:
- Somewhere, there is a directory - let's call it hydra here - which must contain a global Hydra Makefile (which builds all modules), and the hades.*.mk Makefiles. All module directories e.g. base or pid are sub-directories to hydra.
Set HADDIR to hydra, e.g.
export HADDIR=/path/to/hydra
- Unset MYHADDIR to avoid a mixture of versions, because there shouldn't be any private modules yet.
- These steps are done automatically in the global default Makefile.
Now do:
make; make install INSTALL_DIR=/wherever/you/want/to/have/it
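Putting these steps together, a complete from-scratch session might look like this (a sketch; repository URL from the SVN section above, installation path illustrative):
//-------------------------------
# get the sources: modules + global Makefile + hades.*.mk
svn co https://subversion.gsi.de/hades/hydra2/trunk hydra
cd hydra
export HADDIR=$PWD   # hydra contains the global Makefile and all modules
unset MYHADDIR       # no private modules yet
make
make install INSTALL_DIR=/path/to/installation
//-------------------------------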
Building a set of private Hydra modules:
- Set HADDIR to the global Hydra installation location, where basically all modules should be installed.
- Set MYHADDIR to your private installation location if you have already some local modules you want to use.
- Write your own reduced global Makefile (see below).
- In the directory which contains the modules to be compiled as sub-directories, do:
make; make install
Building a single module:
- Set HADDIR to the global Hydra installation location, where basically all modules should be installed.
- Set MYHADDIR to your private installation location if you have already some local modules you want to use.
Either do
make; make install
in the module directory, or execute
make MODULES=dir-name; make install MODULES=dir-name
using your global Makefile - MODULES=dir-name is only needed if you want to override the list of modules which is defined in your global Makefile.
Building an application:
- Set HADDIR to the global Hydra installation location, where basically all modules should be installed.
- Set MYHADDIR to your private installation location if you have already some local modules.
In the application directory, do:
make; make install INSTALL_DIR=/wherever/you/want/to/have/it
, typically
INSTALL_DIR=~/bin
Take care that you don't mix different library (Hydra) versions while building modules!
- All modules must be sub-directories of the directory which contains the global Makefile (if one was used at all). This Makefile can then be used to build all those modules.
All modules and applications keep dependencies on libraries internally, including the full path as it was found while linking the library ("rpath" mechanism). This can be checked using the ldd command.
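For example, to inspect the linked libraries and the embedded rpath of an installed module library (libMdc.so as an illustrative name):
//-------------------------------
# list all shared libraries the module is linked against, with full paths
ldd $HADDIR/lib/libMdc.so
# show the rpath entry recorded in the binary (readelf is part of binutils)
readelf -d $HADDIR/lib/libMdc.so | grep -i rpath
//-------------------------------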
To use just global modules (from /misc/hadessoftware/install) do:
unset MYHADDIR
- To use just local (private) modules install all in MYHADDIR.
- Local or private modules are always preferred over global ones (MYHADDIR always beats HADDIR).
To temporarily use a module from global location, deinstall it locally:
make deinstall MODULES="dir-name ..."
The re-installation is very fast, since it is just copied from the build directory.
- The hades.*.mk Makefiles are placed in HADDIR by default (see below).
Targets
Call make with the following targets (without any target, 'all' is taken as the default one):
all
Is an alias for 'build'
check
This target is independent of all other targets and checks the filename and directory structure of your project.
depend
Create the dependency files. If this target was not executed, then the files are created during the 'build' step, automatically. Creating the dependencies is much faster than building objects, and will therefore uncover some possible problems much faster.
build
Build all libraries/modules/applications
install
Install the libraries/applications and corresponding header files such that they are ready for usage by CINT.
doc
Creates the HTML class documentation
deinstall
Antagonist of
install
clean
Deletes all files but dependencies and sources
distclean
Deletes all files but sources
How to write new Makefiles
make -s
Silent Mode - don't echo commands; only explicit echo commands, warnings and errors are shown
make -n
No Operation Mode - echo all commands but don't execute them. Good debugging feature!
make -jX
Parallel Build - process X modules/files in parallel (it turned out that when building a Hydra library, 2 jobs per CPU/core are the optimum, otherwise 1 job per CPU/core is enough)
make -f file
Use 'file' as input makefile
make -p
Print database of variables and rules - nice feature for debugging Makefiles (Hint: Better redirect the bulk output to a file)
- += Appends a value to a variable
- ?= Sets a variable to a value only if it is not already set: this gives you the option to override settings via the shell environment - variables set on the command line always have the highest precedence.
- := Sets a variable by evaluating the right side immediately
- = Sets a variable by evaluating the right side at the moment the variable is used (this gives you the possibility to propagate changes of settings via different variables)
Example of a global Makefile
MODULES ?= base mdc ora
include $(HADDIR)/hades.def.mk
### possibly override default or append new definitions here
include $(HADDIR)/hades.rules.mk
### possibly override default or append new rules here
These lines describe a Makefile which builds the modules 'base', 'mdc' and 'ora'. Each module must have its own Makefile. If one uses '?=' as assignment operator, then one has the possibility to override the list of modules to be built on the command line, e.g. with "make MODULES=mdc" just the module 'mdc' is built.
When building Hydra not completely from scratch, you will need at least HADDIR pointing to the correct installation!
The global makefile is a good place to introduce hard-wired private settings. It is better not to change module makefiles, since temporary changes in several modules might lead to errors if one forgets to revert or synchronize them. Here is an example of some common manipulations of default settings/behaviour:
MODULES ?= base mdc ora
BUILD_DIR := /tmp
INSTALL_DIR ?= /home/my/install
USES_RFIO := yes
USES_GFORTRAN := yes
USES_CERNLIB := yes
USES_ORACLE := yes
include $(HADDIR)/hades.def.mk
.PHONY: default
default: build install
include $(HADDIR)/hades.rules.mk
If you type make, this makefile actually does the same things as described above, but it uses /tmp as build directory for all modules and an alternative installation directory which can be altered via environment settings, and it executes the installation automatically after a successful compilation of the modules.
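Since both makefiles use '?=' for some variables, the settings can be overridden from outside without editing any file; for example (paths illustrative):
//-------------------------------
# build only the mdc module instead of the full list
make MODULES=mdc
# install to a different location via the environment (INSTALL_DIR uses ?=)
INSTALL_DIR=/home/my/other-install make
# variables given on the make command line always have the highest precedence
make MODULES="base ora" INSTALL_DIR=/home/my/other-install
//-------------------------------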
Example of a module Makefile
LIB_NAME := MyLib
SOURCE_FILES := file1.cc file2.pc file3.c file4.f
USES_RFIO := yes
USES_GFORTRAN := yes
USES_CERNLIB := yes
USES_ORACLE := yes
include $(HADDIR)/hades.def.mk
### possibly override default or append new definitions here
include $(HADDIR)/hades.module.mk
### possibly override default or append new rules here
These lines describe a Makefile which builds the library 'libMyLib.so'. This library is made of 4 objects written in C++, Oracle PL/SQL, C, and Fortran 77 (in this order). It uses RFIO explicitly (currently only done by the modules base, rfio and htools - otherwise this flag is not needed), parts of the CERN library and Oracle. Here, it is not necessary to set USES_ORACLE, since the build system recognizes the usage of an Oracle precompiler file (ending '.pc'). However, it is good style to set it explicitly if needed.
Example of an application Makefile
APP_NAME := myapp
SOURCE_FILES := file1.cc file2.pc file3.c file4.f
USES_RFIO := yes
USES_GFORTRAN := yes
USES_CERNLIB := yes
USES_ORACLE := yes
include $(HADDIR)/hades.def.mk
# override default list of linked Hydra libraries - before they can act on the rules
HYDRA_LIBS := -lHydra -lMdc
include $(HADDIR)/hades.app.mk
### possibly override default or append new rules here
These lines describe a Makefile which builds the application 'myapp'. The program is made of 4 objects written in C++, Oracle PL/SQL, C, and Fortran 77 (in this order). It uses RFIO explicitly (currently only done by the modules base, rfio, htools and the DST macros - otherwise this flag is not needed), parts of the CERN library and only the Hydra libraries 'Hydra' and 'Mdc' - otherwise almost all libraries are linked by default (see hades.def.mk). Here, it is not necessary to set USES_ORACLE, since the build system recognizes the usage of an Oracle precompiler file (ending '.pc'). However, it is good style to set it explicitly if needed.
Software installation locations
Software distribution
The software for the desktop machines (debian10), lx-pool.gsi.de as well as the interactive batch nodes and the virgo3 cluster is hosted and distributed via cvmfs (CERN Virtual Machine File System). cvmfs distributes the files to local caches on each node. When using cvmfs for the first time it can take a while until the files are copied. Once copied, the software should load faster. For debian8 and debian10, software installations for Hydra, Hgeant and more are provided.
From the shell
/cvmfs/hades.gsi.de/install (debian8)
/cvmfs/hadessoft.gsi.de/install (debian10)
shows the installations available at GSI.
The install path keeps the ROOT versions used, and under the corresponding
version's path the dependent hydra, hgeant and pluto installations. The
corresponding environment scripts (defall.sh) can be found in the application
directory. The software is compiled natively as 64-bit. The 32-bit
compatibility mode is no longer supported on the GridEngine batch farm.
example:
debian8:
/cvmfs/hades.gsi.de/install/5.34.34/hydra2-6.5/defall.sh
/cvmfs/hades.gsi.de/param   // parameter files
/cvmfs/hades.gsi.de/source  // source code of hydra2+hgeant2 for browsing
debian10:
/cvmfs/hadessoft.gsi.de/install/debian10/6.24.02/hydra2-6.5/defall.sh
/cvmfs/hadessoft.gsi.de/param   // parameter files
/cvmfs/hadessoft.gsi.de/source  // source code of hydra2+hgeant2 for browsing
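To work with one of these installations, source the corresponding defall.sh in your shell; a quick check could look like this (assuming defall.sh exports HADDIR as used by the build system):
//-------------------------------
# set up the debian10 environment for hydra2-6.5
. /cvmfs/hadessoft.gsi.de/install/debian10/6.24.02/hydra2-6.5/defall.sh
# verify the environment
echo $HADDIR            # Hydra installation directory
root-config --version   # should report 6.24.02
//-------------------------------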
Old software
In 2012 hydra was upgraded to hydra2. hydra2 is not compatible with data taken before 2012. The latest ported versions of hydra and hgeant are installed on /cvmfs/hades.gsi.de and can be used on the current operating systems, built on top of ROOT 5.34.34.
/cvmfs/hades.gsi.de/install/5.34.34/old (debian8)