Getting Started

Requirements

Plugin Support

SPHGR is configured by default to work with Rockstar-Galaxies (DM halo groups, including gas & stars) and SKID (baryonic groupings). You must download a custom version of RS-G; the link is provided below.

Compilation

In this section I will briefly describe how to configure your system for a default SPHGR run. This assumes you are using Rockstar-Galaxies & SKID as your DM+Baryon group finders. But first, python!

python

You will need a working python installation that includes numpy, cython, scipy, and cPickle. All-in-one distributions (EPD/Anaconda) should come with most of these components. You can also install mpi4py if you want to run the data analysis in parallel.
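If any of these are missing, the pip-installable ones can typically be grabbed in one go (cPickle ships with python 2's standard library, so it is not listed):

> pip install numpy scipy cython mpi4py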

pyGadgetReader

SPHGR reads gadget files via pygadgetreader; this is a custom python backend that allows you to easily read in Gadget files. It currently supports Gadget type 1, TIPSY, and HDF5 binary files. Download pygadgetreader from bitbucket, then compile & install via the following command:

> python setup.py install

See the readme enclosed with pygadgetreader for further details and installation tips/suggestions.
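To verify the install, you can read a snapshot header and a particle block; a minimal sketch (the snapshot path below is hypothetical):

from pygadgetreader import readheader, readsnap

snap = '/Users/bob/N256L16/snap_N256L16_050'   # hypothetical snapshot path
z    = readheader(snap, 'redshift')            # read a single header value
pos  = readsnap(snap, 'pos', 'gas')            # gas particle positions
print('z = %g, ngas = %d' % (z, len(pos)))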

ROCKSTAR-GALAXIES

Currently, you must download a modified version which allows SPHGR to do its job much more quickly. The custom fork can be found via this bitbucket link. We then need to compile both rockstar-galaxies and the parents utility via:

> make
> make parents

HDF5: edit the Makefile and ensure that you specify your HDF5 include and lib paths in the HDF5_FLAGS variable. Then compile in the following manner:

> make with_hdf5
> make parents
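For reference, the variable might look something like this (the include and lib paths are placeholders; substitute your own HDF5 install locations):

HDF5_FLAGS = -I/opt/local/include -L/opt/local/lib -lhdf5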

SKID

In order to use SKID you must download the customized version found here; it has been modified to read Gadget type-1 binaries and HDF5 files directly. Modify the Makefile and comment/uncomment -DGADGET and/or -DHDF5 depending on which file types you will be using. The compilation should then be as simple as:

> make

HDF5: ensure that the HDF5INCL and HDF5LIB entries within the Makefile point to your HDF5 installation before compiling.
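For example (placeholder paths, adjust for your system):

HDF5INCL = -I/opt/local/include
HDF5LIB  = -L/opt/local/lib -lhdf5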

Configuration

Before we move on to script configurations, we need to accomplish two tasks (the corresponding commands are shown after this list):

  1. copy over template scripts

    I have written a small script in the root directory called setup_templates.py. Run this once, and it will put the user-configurable configs/config.py, configs/sim_config.py, and reduction.py scripts in place. Further updates to these files will be located in the configs/templates directory.

  2. build the cython modules.

    I have included a script called make_extensions.sh in the root directory of SPHGR. If cython is properly installed and up to date this should run without a hitch from the suite’s root directory.
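Both commands are run from SPHGR's root directory:

> python setup_templates.py
> sh make_extensions.sh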

Now we can start configuring the configuration files located in the configs folder. The ones we will be focusing on are configs/config.py & configs/sim_config.py.

configs/config.py

This file contains global parameters related to running the code. The first few variables should be fairly self-explanatory, but the most important ones are the variables SKID, RS, RS_FP, MPI, and PY. Edit these so that each points directly to the executable of the respective program.

note: If you have both MPI and mpi4py installed, set USE_MPI=1 to allow for parallel analysis.
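As an illustration, the relevant block might look like this (every path below is a hypothetical placeholder; the name and location of the binary built by 'make parents' depend on your build):

# configs/config.py -- hypothetical paths, edit for your system
SKID    = '/home/bob/codes/skid/skid'
RS      = '/home/bob/codes/rockstar-galaxies/rockstar-galaxies'
RS_FP   = '/home/bob/codes/rockstar-galaxies/parents'   # built via 'make parents'
MPI     = '/usr/bin/mpirun'
PY      = '/usr/bin/python'
USE_MPI = 1   # requires both MPI and mpi4py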

configs/sim_config.py

This file is slightly tricky; it holds the information needed to find your snapshots. The code allows for the customization of how your snaps are named and stored. The first thing you’ll want to edit is the SIMBASEDIR variable, which is farther down in the file; this tells the code where to start looking. SNAPDIR dictates how your snapshot directory is formatted, and SNAPNAME dictates how your snapshot files are formatted. The easiest way to illustrate how this works is by example. If we have the following configuration:

SIMBASEDIR    = '/Users/bob'
SNAPDIR       = '%s/%s'
SNAPDIR_OPTS  = '(%s,%s)' % (DIR_OPTS.get(0), DIR_OPTS.get(1))
SNAPNAME      = '%s/snap_%s_%03d'
SNAPNAME_OPTS = '(%s,%s,%s)' % (SNAP_OPTS.get(0),SNAP_OPTS.get(1),SNAP_OPTS.get(2))

then when the code is executed the SNAPDIR and SNAPNAME variables will be defined like so:

SNAPDIR  = '/Users/bob/%s'   % (SNAPPRE)
SNAPNAME = '%s/snap_%s_%03d' % (SNAPDIR,SNAPPRE,SNAPNUM)

You will define SNAPPRE and SNAPNUM later when actually executing the code, so don’t worry about these for now. What you will notice is that I set SNAPDIR='%s/%s', meaning that it will be composed of two strings (string/string). These strings are set via SNAPDIR_OPTS; the .get(N) values are taken from the dicts at the top of the configs/sim_config.py file:

SNAP_OPTS = {0:'SNAPDIRS[i]', 1:'SNAPPRE', 2:'SNAPNUMS[j]'}
DIR_OPTS  = {0:'SIMBASEDIR', 1:'SNAPPRE', 2:'DIRPRES[i]'}

These dictionaries use the .get() function to return the string for a given index. I know this is a bit confusing at first, but it was the only way I could implement it so that your snapshot locations and names were completely customizable. It may take some tinkering, but once you set it up you should be good to go.
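If it helps, here is one way to picture the mechanism in plain python (an assumed sketch, not actual SPHGR code; SPHGR's internal substitution may differ):

# sketch of the composition mechanism described above
SIMBASEDIR = '/Users/bob'
SNAPPRE    = 'N256L16'
DIR_OPTS   = {0:'SIMBASEDIR', 1:'SNAPPRE', 2:'DIRPRES[i]'}

SNAPDIR      = '%s/%s'
SNAPDIR_OPTS = '(%s,%s)' % (DIR_OPTS.get(0), DIR_OPTS.get(1))  # '(SIMBASEDIR,SNAPPRE)'

# at run time the option string is evaluated to fill in SNAPDIR:
SNAPDIR = SNAPDIR % eval(SNAPDIR_OPTS)
print(SNAPDIR)   # /Users/bob/N256L16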

The DIRPRES variable allows you to stick snapshots with the same SNAPPRE under different subdirectories. Below are additional explanations for each property within the above dicts:

  • SNAP_OPTS:
    1. SNAPDIRS[i] = snapshot directories
    2. SNAPPRE = snap prefix, usually N256L16 or similar (specified later)
    3. SNAPNUMS[j] = snapshot numbers (specified later)
  • DIR_OPTS:
    1. SIMBASEDIR = base snapshot directory - specified in sim_config.py
    2. SNAPPRE = snap prefix, usually N256L16 or similar (specified later)
    3. DIRPRES[i] = prefix for subdirectories

Next we need to define SNAPPRE and DIRPRES. The first should be fairly self-explanatory; the second specifies any subdirectories that may exist within your snapshot directory. If you do not have any subdirectories under your snapshot folder, simply set:

DIRPRES = ['.']

Last are the parameters ZOOM_RES and ZOOM_LOWRES*. The first sets the effective resolution of your simulation if it is detected to be a zoom; don’t worry about this for cosmological boxes. The second tells SPHGR which particles to consider low-resolution elements. The number is calculated as the sum of 2^particleType over all low-resolution particle types, so if particle types 2, 3, & 5 are low-resolution particles we would set this value to 2^2+2^3+2^5=44.
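In python, the value from that example works out as follows:

# low-resolution particle types for this example: 2, 3, and 5
lowres_types = [2, 3, 5]
ZOOM_LOWRES  = sum(2**p for p in lowres_types)
print(ZOOM_LOWRES)   # 4 + 8 + 32 = 44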

EXAMPLE

Let’s say you are working with a simulation snapshot whose full path is the following:

/home/rthompson/N512L32/snap_N512L32_050

This is what the configs/sim_config.py variables should look like for this particular sim:

SIMBASEDIR    = '/home/rthompson'
SNAPDIR       = '%s/%s'
SNAPDIR_OPTS  = '(%s,%s)' % (DIR_OPTS.get(0), DIR_OPTS.get(1))
SNAPNAME      = '%s/snap_%s_%03d'
SNAPNAME_OPTS = '(%s,%s,%s)' % (SNAP_OPTS.get(0),SNAP_OPTS.get(1),SNAP_OPTS.get(2))
SNAPPRE       = ['N512L32']
DIRPRES       = ['.']
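With SNAPPRE='N512L32' and SNAPNUM=50, these settings compose back to the original path:

SNAPDIR  = '/home/rthompson/N512L32'
SNAPNAME = '/home/rthompson/N512L32/snap_N512L32_050'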

reduction.py

Once configs/config.py and configs/sim_config.py are edited appropriately, we need to set up the reduction script parameters (a sketch of a full set of values follows this list):

  • BEG/END: these determine which snapshot numbers the code will analyze. When working with a linear series of snapshots, say 0-100, set BEG=0, END=100. If you are working with a single snapshot, set BEG=[N] and END will be ignored. Lastly, if your snapshots are not linearly spaced, you can set BEG=[a,c,k,l] and the value specified for END will again be ignored.

  • SKIPRAN: code will skip the analysis if the result already exists. 0=do not skip, 1=skip

  • PROMPT: code will prompt if files are not found, useful to disable if submitting jobs.

  • RUN_XXX: allows you to switch on/off specific analysis processes. 0=do not run, 1=run.

    • SKID - galaxy group finding
    • ROCKSTAR - halo finding
    • MEMBERSEARCH - main analysis
    • PROGEN - most massive progenitor search
    • LOSER - Romeel’s LOSER code
  • MPI_NP: Sets the number of processors to use for parallel analysis via mpi4py.

  • OMP_NP: Number of threads to spawn for misc. calculations

NOTE: Only use PROGEN if you plan on examining a sequence of snapshots; it requires more than one snapshot to work, as it leapfrogs backwards looking for the previous sequential snapshot (SNAPNUM-1).
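Putting it all together, a hypothetical set of reduction.py values might look like this (the RUN_* names below follow the RUN_XXX pattern described above and may differ from the actual template; check your copied reduction.py for the exact spellings):

# hypothetical reduction.py settings: analyze snapshots 0-100 in serial
BEG     = 0     # first snapshot number
END     = 100   # last snapshot number (ignored if BEG is a list)
SKIPRAN = 1     # skip snapshots whose results already exist
PROMPT  = 0     # do not prompt for missing files (useful for batch jobs)

RUN_SKID         = 1   # galaxy group finding
RUN_ROCKSTAR     = 1   # halo finding
RUN_MEMBERSEARCH = 1   # main analysis
RUN_PROGEN       = 1   # most massive progenitor search (needs >1 snapshot)
RUN_LOSER        = 0   # Romeel's LOSER code

MPI_NP = 8   # processors for parallel analysis via mpi4py
OMP_NP = 4   # threads for misc. calculations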

Running

Once the above steps are complete you can execute the reduction script via:

> python reduction.py

and python should take care of the rest. Once this process is complete, most of the subsequent analysis takes place via scripts located in the analysis directory.