Setting up a MaxFilter script

This page will help you set up your own automatic MaxFilter pipeline

Why?

There are two main reasons why you would want an automatic procedure:

  1. That way you can be sure that the same processing is applied to all your subjects (with many subjects, misclicks are bound to happen sooner or later)
  2. If, after having clicked your way through a billion files, you realize that you want to make a change, it is nice to be able to redo everything automatically

Who?

You can do it yourself by using the template that NatMEG offers

How?

When you have run through the setup steps below, you will be able to maxfilter all your files with a single command, which you will find at the end of this tutorial

Follow these simple steps: (be sure to run the commands exactly as entered)

  1. Open a terminal on DACQ (the computer on which you normally find the MaxFilter GUI; there’s an icon on the Desktop)
  2. Go to the “data_scripts/personal_maxfilter_scripts” folder
cd data_scripts/personal_maxfilter_scripts
  3. Create a MaxFilter script of your own (replace “your_name” with, preferably, the name of your project, e.g. “20001_maxfiltering”)
touch your_name.sh
  4. Copy the contents of the master MaxFilter template file into your file
cat ../maxfilter_master.sh | tee ./your_name.sh
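If you want to check that the copy worked, you can compare the two files; diff should print nothing if they are identical:
diff ../maxfilter_master.sh your_name.sh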
  5. Open your file for editing
    5a. Open your file with gedit (opens a new window)
    5b. Open your file with vim (stays inside the terminal, but is less straightforward to use than gedit)
gedit your_name.sh ## 5a
vim your_name.sh ## 5b
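If you are not used to vim: press i to start editing, press Esc when you are done, then type :wq and Enter to save and quit (or :q! to quit without saving)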
  6. Edit your file

    Here the relevant lines of the script are shown:
    Change only what is on the right-hand side of the equals sign (=)
    Default settings are already in place (except for the project, of course), but they may be changed according to the researcher's wishes
    NB! MAKE SURE THAT THERE IS A SPACE BETWEEN VARIABLES AND COMMENT SIGNS (#), e.g. correlation=0.98 ## some comment, NOT correlation=0.98#some comment
    (A rough sketch of how these settings might end up in the actual maxfilter call is shown after the listing below)

project=your_project_name_here    ## the name of your project in /neuro/data/sinuhe
correlation=0.98
autobad=on # on/off
tsss_default=on # on/off (if off does Signal Space Separation, if on does temporal Signal Space Separation)
cal=/neuro/databases/sss/sss_cal.dat
ctc=/neuro/databases/ctc/ct_sparse.fif
movecomp_default=on # on/off
trans=off # on/off
transformation_to=default ## default is "default", but you can supply your own file 
empty_room_files=( 'empty_room.fif' 'also_empty_room.fif' 'etc.' ) ## put the names of your empty room files (consistent naming makes it a lot easier) (files in this array won't have "movecomp" applied) (no commas between files and leave spaces between first and last brackets)
headpos=off # on/off ## if "on", no movement compensation (movecomp is automatically turned off, even if specified "on")
force=off # on/off, "forces" the command to ignore warnings and errors and OVERWRITES if a file already exists with that name
downsampling=off # on/off, downsamples the data with the factor below
downsampling_factor=4 # must be an INTEGER greater than 1, if "downsampling = on". If "downsampling = off", this argument is ignored
sss_files=( 'only_apply_sss_to_this_file.fif' 'resting_state.fif' ) ## put the names of files you only want SSS on (can be used if want SSS on a subset of files, but tSSS on the rest)
apply_linefreq=off ## on/off
linefreq_Hz=50 ## set your own line freq filtering (ignored if above is off)
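
To give an idea of what these settings control, here is a rough sketch of the kind of maxfilter call the master script builds from them. This is NOT the actual contents of maxfilter_master.sh; the file names are made up, and only a subset of the options is shown (assuming the standard MaxFilter command-line options -f, -o, -cal, -ctc, -autobad, -st, -corr and -movecomp):

## Illustration only -- not the actual call in maxfilter_master.sh
input_file=example_raw.fif ## hypothetical input file name
output_file=example_raw_tsss.fif ## hypothetical output file name
maxfilter -f "$input_file" -o "$output_file" \
          -cal "$cal" -ctc "$ctc" \
          -autobad "$autobad" \
          -st -corr "$correlation" \
          -movecomp
## -st and -corr are only used when tsss_default=on; -movecomp is dropped when
## movecomp_default=off, when headpos=on, or when the file is listed in empty_room_files
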
  7. Make your file executable (this means that the commands inside it can be run from the terminal)
chmod u+x your_name.sh
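You can check that this worked by listing the file; the owner permissions should now include an "x", for example -rwxrw-r--
ls -l your_name.sh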
  8. Run your script! (If you have done everything correctly, all your files should be MaxFiltered as you have specified, and they will be found in your project under the corresponding subjects and recording dates)
./your_name.sh
  9. To process all unprocessed files in the future, you can simply open a terminal from the Desktop and run:
./data_scripts/personal_maxfilter_scripts/your_name.sh