Running SSI using Rubin pipelines

Owners: Patricia Larsen @plarsen
Last Verified to Run: 2024-06-04 (by @plarsen)

This notebook will show you how to run the Rubin Observatory's pipeline tools at NERSC. We use synthetic source injection (SSI) as the example; note, however, that the analysis tools framework discussed in the previous tutorial can also be run in this way.

Learning objectives:

After going through this notebook, you should be able to:

  1. Run up-to-date Rubin pipelines on data at NERSC

Important Notes:

  • This document is here in case we have access to small amounts of the full dataset at NERSC for testing purposes; we do not expect the full dataset to be transferred over

  • While these instructions will be up-to-date, no assumptions should be made about the availability of data or the ability to run the pipelines on outdated data.

Logistics: This notebook is intended to be run through the NERSC JupyterHub interface available here: https://jupyter.nersc.gov. To set up your NERSC environment, please follow the instructions available here: https://confluence.slac.stanford.edu/display/LSSTDESC/Using+Jupyter+at+NERSC

Kernel: Please use desc-stack-weekly-latest for this tutorial

Configuration steps

The butler-based data we currently have at NERSC is detailed on this page (search for Data & Computing Resources if the URL doesn't load): https://confluence.slac.stanford.edu/pages/viewpage.action?pageId=437946872

Currently this includes DC2 data, some reprocessing tests and Roman-Rubin data. If you need to use the DC2 data you will need to follow the instructions here: https://confluence.slac.stanford.edu/display/LSSTDESC/DC2+Data+with+the+Gen3+Butler, including getting permission to access the data.

For the purposes of this demonstration, I'm injecting synthetic sources into the DC2 data.

Setup steps

If you're doing the measurement or reprocessing steps you will probably want to be on a compute node, so let's get an interactive job. This is how you get one interactive node with a maximum time of 4 hours, charging to the DESC account:

salloc --nodes 1 --qos interactive --time 4:00:00 --constraint cpu --account=m1727

Once on the node, the way I normally get the right environment with the version of the pipelines I want is to use a shifter image, like this:

shifter --image=lsstsqre/centos:7-stack-lsst_distrib-w_2024_10 /bin/bash
source /opt/lsst/software/stack/loadLSST.bash
setup lsst_distrib
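
To confirm the environment is set up correctly, you can check which version of the pipelines is active (eups list is just one convenient check):

eups list lsst_distrib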

This doesn't work for batch jobs using BPS; the documentation will be updated to include those instructions.

If this one isn't the right shifter image, you can search for one using

shifterimg images | grep "lsstsqre/centos:7-stack-lsst_distrib-w_*"

and see what is available

For more information on shifter images, have a look at:

  • https://confluence.slac.stanford.edu/pages/viewpage.action?spaceKey=LSSTDESC&title=NERSC+Software+Installations#NERSCSoftwareInstallations-Shifter

  • https://pipelines.lsst.io/install/docker.html#docker-tags

  • https://pipelines.lsst.io/install/newinstall.html#newinstall-other-tags

First steps

Now that you're in the right environment and have lsst_distrib set up, you can query collections from the command line, e.g.

butler query-collections /global/cfs/cdirs/lsst/production/gen3/DC2/Run2.2i/repo/butler-schema-migration.yaml *coadds*
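
You can query registered dataset types in the same way, e.g. (the glob here is just illustrative):

butler query-dataset-types /global/cfs/cdirs/lsst/production/gen3/DC2/Run2.2i/repo/butler-schema-migration.yaml "*Coadd*"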

And display the task graph for a given step of a pipeline yaml file like this:

pipetask build -p test-injection-new.yaml#step1 --show task-graph
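
If you want to inspect the full expanded pipeline rather than just the task graph, the same command with --show pipeline should dump it as yaml:

pipetask build -p test-injection-new.yaml#step1 --show pipeline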

Starting SSI: Set up pipeline configuration files and source injection catalog

We can make an injection pipeline for deepCoadd images like so (note that you need $DRP_PIPE_DIR set to the right version of the pipelines, which is why we set up the environment this way):

make_injection_pipeline -t deepCoadd -r $DRP_PIPE_DIR/pipelines/_ingredients/LSSTCam-imSim/DRP.yaml -f test-injection-new.yaml

We can then make a test injection catalog:

generate_injection_catalog -a 53.04 54.78  -d -32.73 -31.24 -n 10 -p source_type Sersic -p mag 15 17 19 -p n 1 2 4 -p half_light_radius 5 10 -f my_injection_catalog_new.csv
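
It's worth glancing at the generated catalog before ingesting it, to check that the columns and parameter combinations look sensible:

head -n 5 my_injection_catalog_new.csv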

And ingest it into the butler:

ingest_injection_catalog -b /global/cfs/cdirs/lsst/production/gen3/DC2/Run2.2i/repo/butler-schema-migration.yaml -i my_injection_catalog_new.csv g r i z y u -o u/plarsen/ssi_test_schemamigration2
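
To verify the ingest worked, you can re-use the query-collections command from above to look for the new collection:

butler query-collections /global/cfs/cdirs/lsst/production/gen3/DC2/Run2.2i/repo/butler-schema-migration.yaml "u/plarsen/ssi_test_schemamigration2*"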

Now we are theoretically ready to create the injected images. However, if you measure catalogs from these you'll hit errors, so we add an extra step.

Create missing data

We need to re-run the earlier steps (yes, step 1 is necessary; you have to go all the way back to ISR). With images that have been processed more recently this won't be necessary.

To do this we run the following (if you're testing, add AND patch=29 to the data query to make this faster; see the notes below):

pipetask --long-log --log-file test.log run -b '/global/cfs/cdirs/lsst/production/gen3/DC2/Run2.2i/repo/butler-schema-migration.yaml' -i 'u/descdm/coadds_Y1_4430','u/plarsen/ssi_test_schemamigration2' -p test-injection-new.yaml#step1 -d "band='g' AND tract=4430 AND skymap='DC2'" -o 'u/plarsen/ssi_test_injected_schemamigration2' --register-dataset-types -j64
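
If you want to preview what the command above will execute before committing to the full run, you can build just the quantum graph with the same arguments (a sketch; pipetask qgraph shares the -b/-i/-p/-d options with pipetask run):

pipetask qgraph -b '/global/cfs/cdirs/lsst/production/gen3/DC2/Run2.2i/repo/butler-schema-migration.yaml' -i 'u/descdm/coadds_Y1_4430','u/plarsen/ssi_test_schemamigration2' -p test-injection-new.yaml#step1 -d "band='g' AND tract=4430 AND skymap='DC2'"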

Notes

  • --register-dataset-types is necessary the first time you run this, because we are creating new dataset types which may not yet be registered

  • -j64 runs 64 processes, which will speed things up a lot

  • We're including the output in the input here for a reason: if you get partway through and need to reprocess, the pipeline can see the outputs already created and reduce the amount of work (I'm not entirely sure about this, but I believe adding it is the sensible option). It also contains the injection catalog, which will be used to inject the images.

  • If you're testing, add AND patch=29 to the data query to make this faster (see the example below)
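
For example, the restricted data query would look like this:

-d "band='g' AND tract=4430 AND patch=29 AND skymap='DC2'"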

Running as a batch job

Create a submission script; mine looks something like this (I suspect I don't have to specify the shifter image twice, but I'm not sure which one is necessary so I've left both). run_pipetask.sh has to have executable permissions, and you would need an srun in front of the last line if you want to run multiple tasks.

Note that here I'm using the shared queue with only 8 CPUs per task. This is because the step I'm running doesn't use much threading or memory, so I can use less of the machine and reduce our charging while running for longer. Use with caution!

#!/bin/bash
#SBATCH -A m1727
#SBATCH -C cpu
#SBATCH -q shared
#SBATCH -t 12:00:00
#SBATCH --nodes=1
#SBATCH --ntasks=1
#SBATCH --cpus-per-task 8
#SBATCH --image=lsstsqre/centos:7-stack-lsst_distrib-w_2024_10


export OMP_PROC_BIND=spread
export OMP_PLACES=threads
export OMP_NUM_THREADS=8

shifter --image=lsstsqre/centos:7-stack-lsst_distrib-w_2024_10 ./run_pipetask.sh
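
As noted above, run_pipetask.sh needs executable permissions before the batch job can call it:

chmod +x run_pipetask.sh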

My run_pipetask.sh file looks like this (and yes the #!/bin/sh is necessary):

#!/bin/sh

source /opt/lsst/software/stack/loadLSST.bash
setup lsst_distrib


pipetask --long-log --log-file test.log run -b '/global/cfs/cdirs/lsst/production/gen3/DC2/Run2.2i/repo/butler-schema-migration.yaml' -i 'u/descdm/coadds_Y1_4430','u/plarsen/ssi_test_schemamigration2' -p test-injection-new.yaml#step3c -d "band='g' AND tract=4430 AND skymap='DC2'" -o 'u/plarsen/ssi_test_injected_schemamigration2' --register-dataset-types -j8

Then submit this in the normal way with

sbatch run_injection.sh

And to look at the status use

squeue --me
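
While the job is running, you can also follow the log file specified via --log-file:

tail -f test.log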

Note:

  • for step 3 to run correctly I had to remove the links to truth matching from the pipeline yaml file, as I didn't have the truth catalogs ingested into the butler. Once a fix is found for this we can add it to the notes.

Now continue by running step 2:

pipetask --long-log --log-file test.log run -b '/global/cfs/cdirs/lsst/production/gen3/DC2/Run2.2i/repo/butler-schema-migration.yaml' -i 'u/descdm/coadds_Y1_4430','u/plarsen/ssi_test_schemamigration2' -p test-injection-new.yaml#step2 -d "band='g' AND tract=4430 AND skymap='DC2'" -o 'u/plarsen/ssi_test_injected_schemamigration2' --register-dataset-types -j64

Then for step 3, the injection will be run as part of this, followed by measuring object catalogs from the injected images. Watch out, though: things like the deblender take a long time to run!

pipetask --long-log --log-file test.log run -b '/global/cfs/cdirs/lsst/production/gen3/DC2/Run2.2i/repo/butler-schema-migration.yaml' -i 'u/descdm/coadds_Y1_4430','u/plarsen/ssi_test_schemamigration2' -p test-injection-new.yaml#step3 -d "band='g' AND tract=4430 AND skymap='DC2'" -o 'u/plarsen/ssi_test_injected_schemamigration2' --register-dataset-types -j64
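
Once step 3 finishes, you can check that the injected outputs actually landed in the output collection. For example (the injected_deepCoadd dataset type name here is my assumption, based on the injected_ prefix the injection tasks add by default):

butler query-datasets /global/cfs/cdirs/lsst/production/gen3/DC2/Run2.2i/repo/butler-schema-migration.yaml "injected_deepCoadd*" --collections u/plarsen/ssi_test_injected_schemamigration2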

Then theoretically you should be done!