Abstract
The Locus Algorithm is a new technique that improves the quality of differential photometry by optimising the choice of reference stars. At the heart of this algorithm is a routine that rates each potential reference star by comparing its SDSS magnitude values to those of the target star. In this way, the difference between target and reference in the wavelength-dependent effects of scattering in the Earth's atmosphere can be minimised. This paper sets out a new way to estimate the quality of each reference star using machine learning. A random subset of SDSS stars with spectra was chosen, and for each one a suitable reference star, also with a spectrum, was selected. The correlation between the two spectra in the SDSS r band (between 550 nm and 700 nm) was taken as the gold-standard measure of how well matched they are for differential photometry. The five SDSS magnitude values for each of these stars were used as predictors. A number of supervised machine learning models were constructed on a training set of the stars, and each was evaluated on a testing set. Of these, the model using Support Vector Regression (SVR) performed best. It was then tested on a final hold-out validation set of stars to obtain an unbiased measure of its performance. With an R² of 0.62, the SVR model offers improved performance for the Locus Algorithm technique.
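The workflow the abstract describes, SDSS magnitudes as predictors, an r-band spectral correlation as the regression target, and an SVR evaluated by R² on a hold-out set, can be sketched as below. This is a minimal illustration under stated assumptions, not the authors' actual code: the random arrays stand in for real SDSS data, and the choice of ten features (the five magnitudes of each star in a target–reference pair), the RBF kernel, and the split proportions are all assumptions for the sake of the example.

```python
import numpy as np
from sklearn.metrics import r2_score
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVR

# Hypothetical inputs: one row per target/reference pair.
#   X: the five SDSS magnitudes (u, g, r, i, z) of both stars -> 10 features
#   y: the correlation of the two spectra in the r band (550-700 nm)
rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 10))       # placeholder magnitude features
y = rng.uniform(0.0, 1.0, size=1000)  # placeholder correlation targets

# Training set, testing set, and a final hold-out validation set,
# mirroring the three-way protocol described in the abstract.
X_train, X_rest, y_train, y_rest = train_test_split(
    X, y, test_size=0.4, random_state=42)
X_test, X_val, y_test, y_val = train_test_split(
    X_rest, y_rest, test_size=0.5, random_state=42)

# Support Vector Regression with feature scaling; an RBF kernel is a
# common default but is an assumption here, not taken from the paper.
model = make_pipeline(StandardScaler(), SVR(kernel="rbf"))
model.fit(X_train, y_train)

# R² on the testing set guides model selection; R² on the hold-out
# validation set gives the unbiased final measure of performance.
print("test R²:      ", r2_score(y_test, model.predict(X_test)))
print("validation R²:", r2_score(y_val, model.predict(X_val)))
```

On real data, the same pattern would apply with the magnitude columns drawn from the SDSS photometric catalogue and the targets precomputed from pairs of SDSS spectra.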
| Original language | English |
| --- | --- |
| Article number | 102037 |
| Journal | New Astronomy |
| Volume | 102 |
| DOIs | |
| Publication status | Published - Aug 2023 |
Keywords
- Computing
- Differential photometry
- Machine learning
- SDSS
- Stellar spectra