obiwan.dplearn.create_training

Saves 64x64 pixel cutouts of each source in a Data Release as HDF5 files.

Classes

SimStamps([ls_dir, outdir, savedir, jpeg]) Object for extracting sim cutouts
TractorStamps([ls_dir, outdir, savedir, jpeg])
UserDefinedStamps([ls_dir, outdir, savedir, …])

Functions

flux2mag(nmgy)
get_ELG_box(rz, gr[, pad])
get_xy_pad(slope, pad) Returns dx,dy
mpi_main([nproc, which, outdir, ls_dir, …])
testcase_main()
y1_line(rz[, pad])
y2_line(rz[, pad])
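flux2mag presumably converts Legacy Surveys fluxes in nanomaggies to AB magnitudes; a minimal sketch, assuming the standard nanomaggy zeropoint of 22.5 (the actual function may also handle non-positive fluxes differently):

```python
import numpy as np

def flux2mag(nmgy):
    """Convert flux in nanomaggies to AB magnitude.

    Assumes the Legacy Surveys convention mag = 22.5 - 2.5*log10(flux),
    i.e. a 1 nanomaggy source has AB magnitude 22.5.
    """
    return 22.5 - 2.5 * np.log10(nmgy)

print(flux2mag(1.0))   # 22.5
print(flux2mag(10.0))  # 20.0
```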
class obiwan.dplearn.create_training.SimStamps(ls_dir=None, outdir=None, savedir=None, jpeg=False)[source]

Object for extracting sim cutouts

Parameters:
  • ls_dir – LEGACY_SURVEY_DIR, like ‘tests/end_to_end/testcase_DR5_grz’
  • outdir – path to dir containing the obiwan, coadd, tractor dirs
extract(hw=32)[source]

For each id,x,y in self.cat, extracts image cutout

Parameters: hw – half-width in pixels; the cutout is (hw*2) x (hw*2)
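The cutout step can be sketched with plain numpy slicing; this is an illustrative stand-in for extract, not the actual implementation (img, x, y are hypothetical names, and how the real code handles edge sources is not shown here):

```python
import numpy as np

def cutout(img, x, y, hw=32):
    """Extract a (hw*2) x (hw*2) stamp centred on pixel (x, y).

    img is indexed [row, col] = [y, x]; sources too close to the
    image edge for a full stamp are skipped (None is returned).
    """
    x, y = int(round(x)), int(round(y))
    if x - hw < 0 or y - hw < 0 or x + hw > img.shape[1] or y + hw > img.shape[0]:
        return None
    return img[y - hw:y + hw, x - hw:x + hw]

img = np.arange(200 * 200, dtype=float).reshape(200, 200)
stamp = cutout(img, x=100, y=80, hw=32)
print(stamp.shape)  # (64, 64)
```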
load_data(brick, cat_fn, coadd_dir)[source]

loads coadd and catalogue data

Parameters:
  • brick
  • coadd_dir – path/to/rs0, rs300, rs300_skipid, etc
run(brick, stampSize=64, applyCuts=True, zoom=None)[source]

Writes the HDF5 image files for all rs/* in this brick

Parameters:
  • brick – brickname
  • stampSize – height and width in pixels of the training image
  • zoom – set if legacypipe was run with the zoom option
set_paths_to_data(brick)[source]

Lists of catalogue filenames and coadd dirs

class obiwan.dplearn.create_training.TractorStamps(ls_dir=None, outdir=None, savedir=None, jpeg=False)[source]
extract(hw=32)

For each id,x,y in self.cat, extracts image cutout

Parameters: hw – half-width in pixels; the cutout is (hw*2) x (hw*2)
isFaint_cut(df)[source]

There are only faint sources in the DEEP2-matched sample, but the Tractor catalogues also have a bright population, presumably stars. Removes these.

Parameters: df – pd.DataFrame holding the tractor catalogue's extinction-corrected grz mags
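A sketch of what such a faint-only cut might look like with pandas; the magnitude threshold and the band it is applied in are hypothetical, since the actual values used by isFaint_cut are not shown on this page:

```python
import pandas as pd

FAINT_R = 21.0  # hypothetical threshold; the real cut may differ

def is_faint(df, rmag_col='r', thresh=FAINT_R):
    """Boolean mask keeping only sources fainter than `thresh` in r."""
    return df[rmag_col] > thresh

df = pd.DataFrame({'r': [17.2, 21.5, 23.0]})
print(df[is_faint(df)])  # drops the bright (r = 17.2) row
```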
load_data(brick, cat_fn, coadd_dir)

loads coadd and catalogue data

Parameters:
  • brick
  • coadd_dir – path/to/rs0, rs300, rs300_skipid, etc
run(brick, stampSize=64, applyCuts=True, zoom=None)

Writes the HDF5 image files for all rs/* in this brick

Parameters:
  • brick – brickname
  • stampSize – height and width in pixels of the training image
  • zoom – set if legacypipe was run with the zoom option
set_paths_to_data(brick)[source]

Lists of catalogue filenames and coadd dirs

sim_sampling_cut(df)[source]

Same cut as is applied to the simulated sources

Parameters: df – pd.DataFrame holding the tractor catalogue's extinction-corrected grz mags
class obiwan.dplearn.create_training.UserDefinedStamps(ls_dir=None, outdir=None, savedir=None, jpeg=False)[source]
extract(hw=32)

For each id,x,y in self.cat, extracts image cutout

Parameters: hw – half-width in pixels; the cutout is (hw*2) x (hw*2)
load_data(brick, cat_fn, coadd_dir)

loads coadd and catalogue data

Parameters:
  • brick
  • coadd_dir – path/to/rs0, rs300, rs300_skipid, etc
run(brick, stampSize=64, applyCuts=True, zoom=None)

Writes the HDF5 image files for all rs/* in this brick

Parameters:
  • brick – brickname
  • stampSize – height and width in pixels of the training image
  • zoom – set if legacypipe was run with the zoom option
set_paths_to_data(brick)[source]

Lists of catalogue filenames and coadd dirs

obiwan.dplearn.create_training.get_ELG_box(rz, gr, pad=None)[source]
Parameters:
  • rz – r-z
  • gr – g-r
  • pad – magnitudes of padding to expand TS box
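get_ELG_box is presumably the ELG target-selection box in (r-z, g-r) colour space, optionally expanded by pad magnitudes on each side. The real edges come from the sloped y1_line/y2_line cuts, whose coefficients are not reproduced on this page; the sketch below uses a simple axis-aligned rectangle with placeholder limits to illustrate the padding idea:

```python
import numpy as np

def in_padded_box(rz, gr, box=((0.3, 1.6), (-0.5, 1.0)), pad=0.0):
    """Mask of points inside a colour-colour box expanded by `pad` mag.

    box = ((rz_min, rz_max), (gr_min, gr_max)) is a placeholder for
    the true (sloped) ELG selection boundaries.
    """
    (rz_lo, rz_hi), (gr_lo, gr_hi) = box
    rz, gr = np.asarray(rz), np.asarray(gr)
    return ((rz > rz_lo - pad) & (rz < rz_hi + pad) &
            (gr > gr_lo - pad) & (gr < gr_hi + pad))

print(in_padded_box([0.5, 2.0], [0.2, 0.2]))           # [ True False]
print(in_padded_box([0.5, 2.0], [0.2, 0.2], pad=0.5))  # [ True  True]
```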
obiwan.dplearn.create_training.get_xy_pad(slope, pad)[source]

Returns dx,dy

obiwan.dplearn.create_training.mpi_main(nproc=1, which=None, outdir=None, ls_dir=None, savedir=None, jpeg=False, bricks=[])[source]
Parameters:
  • nproc – > 1 for mpi4py
  • which – one of [‘tractor’,’sim’,’userDefined’]
  • outdir – path to the coadd and tractor dirs
  • ls_dir – not needed if legacy_survey_dir env var already set
  • savedir – where to write the hdf5 files, outdir if None
  • jpeg – extract .jpg instead of .fits
  • bricks – list of bricks to make HDF5 cutouts from
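With nproc > 1 the brick list is presumably divided across MPI ranks via mpi4py, each rank then calling run() on its own bricks. A minimal sketch of that division of labour (the actual rank logic inside mpi_main may differ; the brick names are made-up examples):

```python
import numpy as np

def bricks_for_rank(bricks, rank, nproc):
    """Split the brick list into nproc near-equal chunks and
    return the chunk this rank should process."""
    return list(np.array_split(np.array(bricks), nproc)[rank])

bricks = ['1238p245', '1239p245', '1240p245', '1241p245', '1242p245']
for rank in range(2):
    print(rank, bricks_for_rank(bricks, rank, nproc=2))
# rank 0 gets 3 bricks, rank 1 gets the remaining 2
```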