# Fourier-Hankel Code (D4.11)

## Introduction

Among the various techniques currently used in local helioseismology (ring-diagram analysis: Hill (1988); time-distance analysis: Duvall et al. (1993); acoustic holography: Lindsey & Braun (1998)), the Fourier-Hankel spectral decomposition was one of the earliest techniques used by helioseismologists to explore the subsurface regions of the Sun. It was first employed by Braun et al. (1987), and later followed up by a series of works (Braun et al., 1988; Braun & Duvall, 1990; Braun et al., 1990, 1992) to study the interaction of solar acoustic modes (p-modes) with isolated large-scale magnetic structures such as sunspots. Sunspots are local discontinuities in the near-surface regions of the Sun. The p-modes sample these discontinuities and leave a signature of the interaction on the oscillation spectra, whereas the spectra of the modes that have not encountered the sunspot remain unaltered. The Fourier-Hankel method can be used to separate these two wave fields by decomposing the wave field into radially inward and outward propagating waves on an annular region centered on the sunspot. Although Hankel functions are good approximations, later studies also explored this method with the help of associated Legendre functions, which account for the curvature of the solar surface.

The application of the Fourier-Hankel/Legendre technique was later extended to other helioseismic studies. Braun & Fan (1998) carried out meridional flow measurements using this technique by decomposing the wave field into equatorward and poleward wave fields and measuring the frequency shifts between them. Further work carried out in this direction by Krieger et al. (2007) and Doerr et al. (2010) has resulted in the development of a fast and efficient code for the computation of the Fourier-Hankel/Legendre transform, which can easily be applied to large amounts of data (Glogowski, 2011). Preliminary results for the meridional flow obtained from this work are comparable to the results obtained using ring-diagram analysis.

This document describes the Fourier-Hankel/Legendre module in the SDO/HMI JSOC data-analysis pipeline for processing HMI data.

## Installing

Extract the archive to the local directory. Edit the Makefile with the appropriate paths to CFITSIO, HDF5 (requires hdf5-1.8.14 compiled with the --enable-parallel option), PostgreSQL, and DRMS libraries.

```
./configure
make
```

If the libraries are not located in your default path, configure with the respective paths:

```
./configure --with-hdf5=<Path to the h5pcc executable> --with-cfitsio=<CFITSIO path> --with-drms=<DRMS Path>
make
```

Make produces three executables: `postel_remap`, `tld`, and `fld`.

## Workflow

The Fourier-Legendre analysis is usually performed on small areas of the solar surface to infer the average properties beneath the selected region. For instance, to estimate the meridional flow as a function of latitude, rectangular patches are extracted from the Dopplergrams in the North-South direction along a specific longitude (usually the Stonyhurst $$\phi =0^{°}$$), centered at different latitudes. The Fourier-Hankel/Legendre decomposition gives the amplitudes of the poleward- and equatorward-directed wave fields. The difference in frequency between these two oppositely directed components can be attributed to a Doppler shift caused by a flow in a particular direction. Assuming that the origin of this flow is the meridional circulation, the frequency shifts can be inverted to get the magnitude of the meridional flow at different depths as a function of latitude.

For sunspot seismology, annular regions are selected centered on the spot, and the decomposition gives the amplitudes of the ingoing and outgoing waves within this annulus. The absorption coefficients and scattering phase shifts can then be estimated from the amplitudes of the two components. It should be noted that for the meridional flow measurements the center is fixed on one of the poles, while in the sunspot case it is the spot center. In both cases, the decomposition is carried out on an equiangular grid, and therefore the first step is to select the region of interest and map it onto an appropriate grid. The Fourier-Hankel/Legendre decomposition is then carried out on this tracked/mapped cube.
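For the sunspot case, the absorption coefficient follows from the ratio of outgoing to ingoing power, $$\alpha = 1 - |B|^2/|A|^2$$, where $$A$$ and $$B$$ are the ingoing and outgoing amplitudes. A minimal sketch in Python (the function name and the numerical values are illustrative, not part of the module):

```python
import numpy as np

def absorption_coefficient(A_in, B_out):
    """Absorption coefficient alpha = 1 - |B|^2 / |A|^2,
    where A_in is the ingoing and B_out the outgoing wave amplitude."""
    return 1.0 - np.abs(B_out) ** 2 / np.abs(A_in) ** 2

# A spot that returns only 70% of the ingoing power absorbs 30%:
alpha = absorption_coefficient(1.0, np.sqrt(0.7))
```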

Example run:

```
create_series kis_vigeesh.hmi_fldvtrack.jsd

./mtrack in='hmi.v_45s[2010.06.01_00:00:00_TAI/8h]' out='kis_vigeesh.hmi_fldvtrack' \
    scale=0.125 lat="{0.}" lon="{200.}" map="carree" rows=400 cols=400 -n -v -o -r

show_info kis_vigeesh.hmi_fldvtrack[2010.06.01_00:00:00_TAI] -P key=MidTime,LonHG,LatHG,Duration

create_series kis_vigeesh.hmi_tldcoeff.jsd

mpirun -np 9 tld in=kis_vigeesh.hmi_fldvtrack[2010.06.01_00:00_TAI][+200][0] \
    out=kis_vigeesh.hmi_tldcoeff m_abs=25 l_max=1000

show_info kis_vigeesh.hmi_tldcoeff[2010.06.01_00:00_TAI] -P key=MidTime,LonHG,LatHG,Duration,lMax,mAbs

create_series kis_vigeesh.hmi_fldcoeff.jsd

./fld in=kis_vigeesh.hmi_tldcoeff[2010.06.01_00:00_TAI][+200][0] out=kis_vigeesh.hmi_fldcoeff

show_info kis_vigeesh.hmi_fldcoeff[2010.06.01_00:00_TAI] -P key=MidTime,LonHG,LatHG,Duration,lMax,mAbs
```



### 1. Mapping and Tracking – mtrack

Each data set (e.g. Dopplergrams, Tracked cubes, SHT coefficients, etc.) is identified by a series in the DRMS (Data Record Management System).

#### Create a new data series

We start by creating a new data series in the DRMS that can hold the tracked cubes: $$V_z (\Theta,\Phi,t)$$. The data series is created using a series definition file, or a JSD file. A template is available under kis_vigeesh.hmi_fldvtrack.jsd.

To create a series that will hold the cube generated using mtrack:

```
create_series kis_vigeesh.hmi_fldvtrack.jsd
```

Note: Unlike the usual pipeline data series, this definition file does not specify the final pixel size and the time length of the tracked data; these are left as variables.

#### Track the region

The next step is to track the desired region. The primary input to the `mtrack` module is the series of full-disk Dopplergrams available as hmi.v_45s. We pass the lat and lon arguments to specify the heliographic latitude ($$\Theta$$) and longitude ($$\Phi$$) of the center of the region to be tracked. The scale argument sets the desired MapScale in degrees/pixel, while rows and cols give the total number of pixel rows and columns of the final mapped region. The FLD part can currently process only the Plate Carrée map projection, so the map argument has to be set to `carree`. The tracked output will be stored in the data series that we just defined.

Example

To generate an untracked region (flag: -n) centered at $$(\Theta,\Phi)=(0,200)$$ spanning 8 hours, with the line-of-sight component of the observer velocity (flag: -o) and solar rotation (flag: -r) removed:

```
./mtrack in='hmi.v_45s[2010.06.01_00:00:00_TAI/8h]' out='kis_vigeesh.hmi_fldvtrack' \
    scale=0.125 lat="{0.}" lon="{200.}" map="carree" rows=400 cols=400 -n -v -o -r
```

There are several ways one can call mtrack. To generate multiple sets of tracked regions, one can specify lat and lon as:

- `lat="{0.,10.,-10.}" lon="{200.}"` – maps centered at $$(\Theta,\Phi) = (-10.,200.), (0.,200.), (+10.,200.)$$.
- `lat="{0.,10.,-10.}" lon="{200.,210.,195.}"` – maps centered at $$(\Theta,\Phi) = (0.,200.), (10.,210.), (-10.,195.)$$.
- `lat="{0.,10.,20.,30.,40.}" lon="{200.,210.,195.}"` – maps centered at $$(\Theta,\Phi) = (0.,200.), (10.,210.), (20.,195.), (30.,195.), (40.,195.)$$.
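The pairing rule implied by these examples can be sketched as follows: lat and lon entries are matched elementwise, and when one list is shorter its last element is repeated. This is a hypothetical helper illustrating the behavior, not part of mtrack:

```python
def pair_centers(lats, lons):
    """Pair lat/lon lists elementwise; the shorter list's last
    element is repeated to match the length of the longer one."""
    n = max(len(lats), len(lons))
    lats = lats + [lats[-1]] * (n - len(lats))
    lons = lons + [lons[-1]] * (n - len(lons))
    return list(zip(lats, lons))

# Reproduces the third example above:
centers = pair_centers([0., 10., 20., 30., 40.], [200., 210., 195.])
# → [(0., 200.), (10., 210.), (20., 195.), (30., 195.), (40., 195.)]
```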

Note:

The current version of fld can only process individual tracked cubes. There is no correction or check on whether the supplied $$\Phi$$ lies outside the observed coverage, in which case the tracked output can be arrays of NaNs.

#### Look at the series

The tracked cube is stored as a FITS file inside the DRMS–SUMS directory and can be viewed using the show_info command.

```
show_info kis_vigeesh.hmi_fldvtrack[2010.06.01_00:00:00_TAI/5m] -P key=MidTime,LonHG,LatHG,Duration
```

The essential keywords that will be used by the next modules are:
MidTime, LonHG, LatHG, Duration, Cadence, MapScale, MapProj, Width, Height

### 2. Legendre time series – tld

#### Create a new data series

A template is available under kis_vigeesh.hmi_tldcoeff.jsd.

```
create_series kis_vigeesh.hmi_tldcoeff.jsd
```

An important thing to note here is that, unlike the other pipeline modules, the data format is set to 'generic' and the output file format is HDF5, for the following reasons:

1. The numbers of l and m values are not predefined, since we need the flexibility to choose them at runtime.
2. It avoids storing multiple FITS files ($$A_{lm}, B_{lm}$$) under the same data record.
3. Writing in parallel is not yet supported for FITS files (AFAIK), which would again generate multiple files per coefficient under the same data record, depending on the number of processors used.
4. We can do away with the meat-grinder, as we now have access to hyperslabs of data.
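Hyperslab access means a downstream reader can pull just the (m, l) slice it needs instead of loading the full coefficient array. A minimal h5py sketch of this partial I/O, using a small stand-in file with the same compound real/imaginary layout under /lmg (the file name and the (m, l, t) axis ordering are assumptions for illustration):

```python
import h5py
import numpy as np

# Build a small stand-in file mimicking the tld output layout.
dt = np.dtype([("real", "<f4"), ("imaginary", "<f4")])
with h5py.File("demo_lmg.h5", "w") as f:
    data = np.zeros((3, 5, 8), dtype=dt)   # assumed (m, l, t) ordering
    data["real"][1, 2, :] = np.arange(8)
    f.create_dataset("/lmg/Alm", data=data)

# Read only one (m, l) hyperslab instead of the whole array:
with h5py.File("demo_lmg.h5", "r") as f:
    slab = f["/lmg/Alm"][1, 2, :]          # partial read, no full load
    alm = slab["real"] + 1j * slab["imaginary"]
```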

#### Do the tld

The input series is the tracked data cube from mtrack. The MapScale and other quantities are automatically passed on from the input series. The user provides the maximum l and the absolute maximum m that are desired. Presently, the code can only process individual sets of ($$\Theta,\Phi$$), as follows.

```
mpirun -np 3 tld in=kis_vigeesh.hmi_fldvtrack[2010.06.01_00:00_TAI][+200][0] out=kis_vigeesh.hmi_tldcoeff m_abs=25 l_max=1000
```

The code runs as follows:
1. The master divides the whole data set into chunks along the time axis; the number of chunks depends on the number of processors assigned with -np.
2. The master processor opens an HDF5 container file in the local directory and adds the $$A_{lm}$$ and $$B_{lm}$$ data spaces to the /lmg group.
3. Each processor then writes the transformed complex values of $$A_{lm}$$ and $$B_{lm}$$ into the HDF5 file at the location of its respective chunk.
4. Finally, the attributes are added to the HDF5 file, and the file is assigned to the data series, where it gets its own SUMS location.
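The chunking in step 1 can be sketched as follows. This is a hypothetical helper, not the code internal to tld; the actual division may differ:

```python
def time_chunks(n_times, n_procs):
    """Split n_times time steps into n_procs contiguous (start, stop)
    chunks, spreading any remainder over the first chunks."""
    base, rem = divmod(n_times, n_procs)
    chunks, start = [], 0
    for p in range(n_procs):
        size = base + (1 if p < rem else 0)
        chunks.append((start, start + size))
        start += size
    return chunks

# e.g. 640 Dopplergrams (8 h at 45 s cadence) split over 9 processes:
bounds = time_chunks(640, 9)
```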

#### Look at the series

```
show_info kis_vigeesh.hmi_tldcoeff[2010.06.01_00:00_TAI] -P key=MidTime,LonHG,LatHG,Duration,lMax,mAbs
```

The file can be viewed using the hdfview tool. Apart from the keywords propagated from the mtrack module (MidTime, LonHG, LatHG, Duration, Cadence, MapScale, MapProj, Width, Height), the data series acquires two new keywords, as defined below.

Table 1. Acquired Keywords

| Keyword | Datatype | Purpose |
|---------|----------|---------|
| lMax | int | Maximum l (l = 0, …, lMax) |
| mAbs | int | Absolute value of the maximum m (m = −mAbs, …, mAbs) |

These keywords are then used by the next module without the need to specify them at runtime.

## Analysis

If we have the output file from fld, for example:

```
/SUMS/SUM2/D76331/S00000/freq_lmg.h5
```

the coefficients can be read as follows.
##### IDL

```
file = '/SUMS/SUM4/D81848/S00000/freq_lmg.h5'
file_id = H5F_OPEN(file)
dataset_Alm = H5D_OPEN(file_id, '/lmg/Alm')
dataset_Blm = H5D_OPEN(file_id, '/lmg/Blm')
; Read the compound datasets into structures with REAL and IMAGINARY fields
a = H5D_READ(dataset_Alm)
b = H5D_READ(dataset_Blm)
alm = complex(a.real, a.imaginary)
blm = complex(b.real, b.imaginary)
```
##### Python

```python
import h5py
import matplotlib.pyplot as plt

f = h5py.File('/SUMS/SUM2/D76331/S00000/freq_lmg.h5', 'r')
a = f['/lmg/Alm']
b = f['/lmg/Blm']
# The datasets are compound (real, imaginary); build complex arrays:
alm = a['real'] + 1j * a['imaginary']
blm = b['real'] + 1j * b['imaginary']

plt.plot(alm[0, 2, :].real)
plt.show()
```
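A simple diagnostic of the coefficients read above is the power spectrum $$|A_{lm}(\nu)|^2$$ of a single mode. A sketch with synthetic data, since the axis ordering of the real dataset is assumed here:

```python
import numpy as np

# Synthetic stand-in for the complex coefficient array, with an
# assumed (m, l, frequency) axis ordering:
rng = np.random.default_rng(0)
alm = np.exp(2j * np.pi * rng.random((3, 5, 16))).astype(np.complex64)

# Power spectrum of one (m, l) mode:
power = np.abs(alm[0, 2, :]) ** 2
```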

## References

Bogdan, T. J., Brown, T. M., Lites, B. W., & Thomas, J. H. 1993, ApJ, 406, 723
Braun, D. C. & Duvall, Jr., T. L. 1990, Sol. Phys., 129, 83
Braun, D. C., Duvall, Jr., T. L., & Labonte, B. J. 1987, ApJ, 319, L27
Braun, D. C., Duvall, Jr., T. L., & Labonte, B. J. 1988, ApJ, 335, 1015
Braun, D. C., Duvall, Jr., T. L., Labonte, B. J., Jefferies, S. M., Harvey, J. W., & Pomerantz, M. A. 1992, ApJ, 391, L113
Braun, D. C. & Fan, Y. 1998, ApJ, 508, L105
Braun, D. C., Labonte, B. J., & Duvall, Jr., T. L. 1990, ApJ, 354, 372
Doerr, H.-P., Roth, M., Zaatri, A., Krieger, L., & Thompson, M. J. 2010, Astronomische Nachrichten, 331, 911
Glogowski, K. 2011, Diploma Thesis.
Krieger, L., Roth, M., & von der Lühe, O. 2007, Astronomische Nachrichten, 328, 252