Correction of MDI Continuum Intensity for Instrument Degradation

SOI TN 00-142
R S Bogart
2000.02.02

Introduction

Variation in the quiet-sun continuum intensity measured by MDI appears to be dominated by uniform long-term trends in loss of instrument sensitivity and certain identifiable discontinuities from various causes. Correcting the measured continuum intensity for these effects provides a dataset with long-term continuity and stability that can be used for sensitive photometric studies of the solar flux budget. In this note I describe the methods by which trends and discontinuities in instrument sensitivity are determined and the corrections applied to the SOI calibrated data products.

Data Analysis

The data analyzed are the level-0 flux-budget data: 23-minute averages of continuum intensity sampled once every 12 minutes, flat-fielded and binned 8x8 onboard, and included in the nearly continuous 5 kbps MDI telemetry stream. Level-0 data are analyzed because the level-1 calibrated data already include a time-dependent intensity correction based on earlier analyses.

Level-0 flux-budget data are available almost continuously from the start of MDI observing under the following Data Product Codes (DPC's):
DPC       Mission Days   Comments
07805001  1079 - 1101    no useful data; no headers online
07815001  1105 - 1106    no useful data
07815002  1107 - 1167    no MISSVALS parameter
07825002  1168 - 1169    no MISSVALS parameter
07835002  1170 - 1186
07845002  1187 - 1215
07855002  1216 - 1539
07865002  1540 ->
So far only data from the last two DPC's, in use during the official operations period of MDI, have been analyzed. It is possible that some useful information could be extracted as far back as mission day 1107 (1996.01.13), but this would take some effort to understand the effects of the various onboard processing procedures used to produce the DPC's involved during that part of the mission. Certainly the data reported were not useful for photometry, but the use of consistent procedures may still enable us to infer the early degradation history of the optics.

The data analysis involved is extremely simple. Data are integrated over a small collection of pixels near the center of each image. In practice I integrate over centered squares of both 4 and 16 pixels for each image; there is little difference in the statistics, so I use the 16-pixel averages because of the lower noise. A 16-pixel area extends to a maximum of 45 arc-sec from image center, assuming the canonical MDI plate scale. (Recall that a "pixel" in the flux-budget data represents the binned average of 64 camera pixels at normal resolution.) At typical aphelion, that is less than 0.047 of the apparent solar radius, so the effect of variations in differential limb darkening on the extracted area over the course of a year is quite small. The latitudinal extent of the selected region is about ± 2° from image center; consequently the region never extends beyond the heliographic latitude band ± 10° and is seldom affected by magnetic activity. Thus the region represents a good approximation to a source of uniform irradiance.
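
For concreteness, the central-box integration can be sketched in a few lines of Python; the image is assumed here to be held in a 2-D array of binned flux-budget pixels centered on the solar disc, and the function name and array layout are illustrative rather than part of the SOI pipeline code:

import numpy as np

def central_box_mean(image, box=4):
    # Mean over a centered box-by-box square of binned flux-budget pixels;
    # box=4 gives the 16-pixel average used here, box=2 the 4-pixel average.
    ny, nx = image.shape
    y0, x0 = ny // 2 - box // 2, nx // 2 - box // 2
    return image[y0:y0 + box, x0:x0 + box].mean()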

In a normal observing day there are 120 samples of the flux-budget data, so we obtain 120 values of the integrated intensity in the central boxes. After removing values from images with known problems, these values are averaged together to obtain a daily value and an estimate of the standard deviation. On `good' days, when all problem images have been accounted for, the standard deviation is about 0.001 of the average of the integrated central intensity over 16-pixel bins; it is about twice that for the averages over 4-pixel bins, consistent with true shot noise. At this time the most difficult part of the procedure is selecting the images so that a time interval long enough to serve as a baseline for fitting secular trends contains enough `good' days.
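
A minimal sketch of the daily reduction, assuming the per-image central-box values for one day are supplied as a simple sequence with the known problem images already removed (the selection itself is described under Data Quality and Rejection below):

import numpy as np

def daily_statistics(box_means):
    # Daily mean and standard deviation of the (up to 120) good
    # central-box intensities for one observing day.
    values = np.asarray(box_means, dtype=float)
    return values.mean(), values.std(ddof=1)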

Once sets of daily values of the averaged central intensity have been compiled, it is only necessary to perform a linear fit to the intensities as a function of time to obtain the rate of loss of instrument sensitivity and to establish baseline values. In practice the trend is found to be rather obviously piecewise linear, with occasional discontinuities in both level and slope (see Figure 1a). Many of these discontinuities can be traced to known causes, especially changes in the Michelson tuning parameters, but not all. It is necessary to identify these discontinuities, which can be done by visual inspection, and then to perform separate linear fits to the data within each interval of apparent continuity (a sketch of this fitting step follows the table and note below). A table of identified discontinuities in the linear trend of MDI photometric response follows:
Date                  Probable Cause
1996.05.04            flat field change?
1996.05.09            flat field change?
1996.05.13            flat field change
1996.11.12            flat field change
1996.11.22            flat field change
1996.11.28            flat field change
1997.03.18            tuning change
1997.08.05            tuning change
1997.11.03            focus change
1997.11.20            unknown
1998.04.01 (approx.)  unknown; slight change in slope
1998.10.30            flat field change?
1998.11.21            ?
1999.02.20?           ?
1999.03.16?           ?
1999.05.29?           ?
1999.06.15?           ?
During the latter half of 1999 the assumption of linear trends in the instrument throughput with time seems to have broken down; a second-order fit in time may be required (Figure 3).
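
The fitting itself is straightforward. The sketch below (in Python; the breakpoint times and the layout of the daily time and intensity arrays are assumptions, not a transcription of the actual procedure) performs an independent least-squares polynomial fit within each interval of apparent continuity; degree 1 gives the linear fits described above, and degree 2 the second-order fits that may be needed for late 1999:

import numpy as np

def piecewise_fits(t, intensity, breakpoints, degree=1):
    # Independent polynomial fits between successive identified discontinuities.
    # t           : daily times for the good days (e.g. seconds since 1996.01.01)
    # intensity   : corresponding daily averages of the central intensity
    # breakpoints : times of the identified discontinuities, in increasing order
    # Returns (t_start, t_end, coefficients) for each interval, with the
    # coefficients in decreasing powers as returned by numpy.polyfit.
    edges = np.concatenate(([t.min()], np.asarray(breakpoints, float), [t.max() + 1.0]))
    fits = []
    for t0, t1 in zip(edges[:-1], edges[1:]):
        in_interval = (t >= t0) & (t < t1)
        if in_interval.sum() > degree:      # need more points than the degree
            coeffs = np.polyfit(t[in_interval], intensity[in_interval], degree)
            fits.append((t0, t1, coeffs))
    return fits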

Data Quality and Rejection

There are several known problems with individual samples of the flux-budget data that can lead to corruption of the values used for the central intensity. One obvious source is non-standard integration times. The 12-minute samples represent Gaussian-weighted sums of measurements made every minute for a total of 23 minutes. Whenever the averaging program is interrupted or restarted, the effective integration time may differ from normal. This will cause the absolute pixel values to differ from normal, but if it is the only effect, the higher moments of the per-pixel distribution of values over the image should be unaffected. Such suspect images are identified by three methods: when the phase of the sampling in the 12-minute schedule changes, when there is a gap between image reference times of more than 12 minutes, and when the mean value of complete successive images changes noticeably with no change in the higher moments of the distribution. These correspond to quality bits 16, 17, and 18 for the affected observables (see the relevant quality flag table). Although these quality bits have not yet been tagged in the logs, an index of affected images is contained in the source code for quality checking. (Lists of images affected by non-uniform sampling are also in the on-line data notes for the affected DPC's, e.g. 07865002.) All such images have been rejected prior to calculating the daily means and variances.
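
A rough sketch of the second and third of these checks follows (in Python; the field names t_ref, mean, and skewness are illustrative stand-ins for quantities taken from the image headers and per-image statistics, and the tolerances are likewise illustrative rather than the values actually used):

def suspect_integration(images, cadence=720.0, mean_tol=0.002, skew_tol=1.0e-3):
    # Flag images whose effective integration time is probably non-standard:
    # either the gap to the previous image exceeds the nominal 12-minute
    # cadence, or the image mean jumps relative to the previous image while
    # the higher moments (represented here by the skewness) stay unchanged.
    flagged = []
    for prev, curr in zip(images[:-1], images[1:]):
        gap = curr["t_ref"] - prev["t_ref"] > cadence + 1.0
        mean_jump = abs(curr["mean"] / prev["mean"] - 1.0) > mean_tol
        skew_stable = abs(curr["skewness"] - prev["skewness"]) < skew_tol
        if gap or (mean_jump and skew_stable):
            flagged.append(curr)
    return flagged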

Once this problem-image rejection has been performed, the resulting daily averages usually exhibit a consistent floor in the standard deviation of about 15.0 data units, as noted above, with significant outliers on days when there are presumably additional uncorrected problem images (see Figure 2). The number of such days has increased dramatically at certain times, particularly during the first half of 1999, rendering the current program infeasible and requiring additional quality checks.
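
The corresponding day-level screening amounts to a simple threshold on the daily standard deviation, as in this sketch (the 25.0 data-unit cutoff is the one used for Figure 1b; treating it as a fixed parameter is an assumption):

def good_days(daily_means, daily_stds, max_std=25.0):
    # Keep only days whose standard deviation is near the ~15 data-unit floor,
    # rejecting days that presumably still contain uncorrected problem images.
    return [(m, s) for m, s in zip(daily_means, daily_stds) if s <= max_std]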

Although the measured intensities are sensitive to both the instrument throughput and the integration time, the higher moments of the per-pixel distribution function for a given image, above the variance, are not, at least to first order. Thus the skewness of the distribution of the continuum intensity, for example, varies quasi-periodically, with a period of about 6 months and little evident secular trend, by about 15%, between -1.8 and -2.1. Variations between consecutive good images are very small, of order one part in 10^4, and the skewness of complete images is in fact a very sensitive indicator of problems with the data, excursions of even one part in 10^3 being significant. The data are currently being examined for correlations with IP errors in order to properly set additional error flag bits.
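
As an illustration of such a skewness-based check (the 1.0e-3 threshold follows the significance level quoted above, but the function itself is a sketch rather than the actual quality-checking code):

import numpy as np

def skewness_excursion(image, reference_skewness, threshold=1.0e-3):
    # Flag an image whose whole-image skewness departs from that of the
    # preceding good image(s) by more than the stated threshold.
    pixels = image[np.isfinite(image)].astype(float)
    s = ((pixels - pixels.mean()) ** 3).mean() / pixels.std() ** 3
    return abs(s - reference_skewness) > threshold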

Correction Procedure

The fit parameters for the continuum throughput trend in each time interval are stored in the file /CM/tables/calib/flat/fd/adjustments in the following format:
# Time-dependent adjustments to full-disc flat-field tables
# Entries are in the form (one entry per line)
# T1  T2  T0  a0  a1  a2 ...
1996.05.01_12:00 1997.03.18_12:00 1996.01.01_00:00 1.0 0.0 1.00317 -1.312e-9
1997.03.18_12:00 1997.11.03_12:00 1996.01.01_00:00 1.0 0.0 1.02003 -0.800e-9
1997.11.03_12:00 1997.11.20_12:00 1996.01.01_00:00 1.0 0.0 1.02141 -0.809e-9
1997.11.20_12:00 2000.12.31_12:00 1996.01.01_00:00 1.0 0.0 1.01383 -0.947e-9
This table (or any similarly formatted table) is inspected by mdical if instructed by the run parameter tvartabl, which, if present in the calling parameter list, is expected to contain the filename of the table to use (see the man page for mdical). The procedure is that for observation times between T1 and T2 the per-pixel calibration parameters a_i are adjusted by multiplication by the value v_i0 / {1.0 + v_i1 * [t - T0]} before being applied. v_10 and v_11 are of course just the best-fit parameters a and b/a, respectively, in the regression of the data on the model
I(t) = a + b(t - T0).
In the example above, for which adjustments are provided only up through first order (appropriate for the flat-fielding calibration of continuum data, which is only linear), between 1996.05.01 and 1997.03.18 the offset (dark-current) terms would be unmodified (multiplied by 1.0 / {1.0 + 0.0 * [t - T0]}), while the gain terms would be multiplied by 1.00317 / {1.0 - 1.312e-9 t}, where t is the time difference (in seconds) between the observation time and 1996.01.01.
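
The arithmetic can be made concrete with a short sketch that evaluates the multiplier for one calibration parameter from a table entry; the function name and the handling of times as seconds since a common epoch are assumptions about, not a transcription of, what mdical does:

def adjustment_factor(v0, v1, t_obs, t0):
    # Multiplier applied to a calibration parameter a_i at observation time t_obs.
    # v0 and v1 are the fitted pair (v_i0, v_i1) from the adjustments table,
    # and t0 is the reference time T0; all times are in seconds since one epoch.
    return v0 / (1.0 + v1 * (t_obs - t0))

# For an observation one year after T0 (t - T0 of about 3.16e7 s), within the
# validity window of the first entry above, the gain multiplier would be
# 1.00317 / (1.0 - 1.312e-9 * 3.16e7), or about 1.047.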

Note that the gain-table adjustment can be made even for data products such as the Flux-Budget and LOI continuum intensities, which are flat-fielded on board. That is because there is a series of correction tables for the various onboard flat fields, and these can be renormalized. Even when we believe the onboard flat-fielded data to require no correction, the level-0 data are still multiplied by a gain table consisting entirely of 1's, which can be, and is, normalized by the above adjustment.

It is an operational question to which level-0 data sets the time-dependent normalization is to be applied when mdical is run. The appropriate observables are any involving continuum intensity taken in the full-disc resolution mode corresponding to any of the following DPC's:

In practice there seems little reason to normalize the limb data, so only the full-disc continuum photograms and the Structure Program continuum products 07* are normalized.

Figures

1a
Daily averages of the integrated signal in the central 16 pixels of the level-0 flux budget data for the year 1997. No filtering of the input data was performed except for rejection of images involving a phase change in the 12-minute sampling or for which the integration time was abnormal.
1b
Same as Figure 1a, but with all daily points for which the daily standard deviation exceeds 25.0 suppressed.
2
Daily values of the standard deviation of the integrated signal in the central 16 pixels of the level-0 flux budget data for each of the years 1996, 1997, 1998, and 1999.
3
Same as Figure 1a, for the year 1999.