AMANDA-II TWR04 System Overview

Goals of the TWR System:
The Transient Waveform Recorder system has several primary goals:

  1. Extend the integrated dynamic range by approximately a factor of 100 compared to the standard AMANDA DAQ (the muon-DAQ).
  2. Reduce the discriminator thresholds to 10-15 mV for the optical channels, which improves the 1 pe detection efficiency.
  3. Track baseline oscillations to reduce susceptibility to the conditions that cause flare runs or to individual noisy PMTs caused by inevitable drifts in the DC baseline levels of the amplifiers.
  4. Develop software triggers to replace the hardware DMAD majority-logic trigger and the string trigger.
  5. Combine TWR data with waveforms from the initial IceCube strings to form a powerful composite detector.

TWR System Architecture:
[Figure: TWR04 schematic]

The TWR system in 2004 consists of 75 TWR modules (SIS3300) distributed over 6 VME crates. Each TWR crate is connected to the master crate via an optical bridge. These bridge modules also contain a DSP for crate-level data processing; the DSP is controlled by FPGA code developed by SIS and K.H. Becker. The TWR modules store the waveforms from 128 consecutive triggers in local memory. When the memory is full, two things happen: (1) a signal is sent to the DAQ software to start readout of the TWRs, and (2) the TWRs switch to a new memory bank so that data readout does NOT interfere with continued data taking by the TWR.
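
To make the bank-switching idea concrete, here is a minimal pure-Python sketch of the control flow: triggers fill one bank, and when it holds 128 of them the module switches to the other bank and hands the full one to readout. The class and its names are illustrative only, not the SIS3300 driver interface.

  # Minimal sketch of the bank-switching readout described above.
  # Illustrative only; this is not the SIS3300 driver interface.

  TRIGGERS_PER_BANK = 128   # each TWR stores 128 consecutive triggers per bank

  class TwrBanks:
      def __init__(self):
          self.banks = [[], []]   # two local memory banks
          self.active = 0         # bank currently being filled

      def store_trigger(self, waveform):
          """Store one triggered waveform; return a full bank when readout should start."""
          self.banks[self.active].append(waveform)
          if len(self.banks[self.active]) < TRIGGERS_PER_BANK:
              return None
          full = self.banks[self.active]
          self.active = 1 - self.active      # switch banks: data taking continues here
          self.banks[self.active] = []       # fresh bank for the next 128 triggers
          return full                        # the full bank goes to the DAQ readout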

The DSP inside the VME-to-VME optical bridge is used to extract features from the non-zero waveforms. Each VME crate of 14 TWRs is read out by its DSP, which then performs the feature extraction. Once the waveform features are extracted, the data is forwarded to the main crate, which forwards it to the computer that performs the event building.
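
The actual DSP algorithm is not reproduced here, but the sketch below illustrates the kind of per-fragment quantities a feature extractor might compute (peak amplitude, peak position, and integrated charge above baseline). The function and field names are assumptions made for illustration.

  # Illustrative per-fragment feature extraction; not the actual DSP/FPGA code.

  def extract_features(samples, baseline):
      """samples: ADC counts of one waveform fragment; baseline: pedestal estimate."""
      above = [s - baseline for s in samples]
      peak_index = max(range(len(above)), key=lambda i: above[i])
      return {
          "amplitude": above[peak_index],            # peak height above baseline
          "peak_sample": peak_index,                 # sample at which the maximum occurs
          "charge": sum(a for a in above if a > 0),  # crude integral above baseline
      }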

A VME-to-PCI bridge between the master crate and a Dell DL380 computer collects the data from the optical bridges, merges the data, builds the events, and writes the data to local disk in MAPO. The data in the local file is then transferred automatically by polechomper to BOS.
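
As a rough illustration of the event-building step (not the actual DAQ software), fragments arriving from the different crates can be grouped by a common trigger identifier into one event record before being written to disk:

  # Sketch of event building: group fragments from all crates by trigger id.
  # The field name 'trigger_id' is an illustrative assumption.

  from collections import defaultdict

  def build_events(fragments):
      """fragments: iterable of dicts, each with at least a 'trigger_id' key."""
      events = defaultdict(list)
      for frag in fragments:
          events[frag["trigger_id"]].append(frag)
      return [events[t] for t in sorted(events)]   # events ordered by trigger number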

The TWRs digitize continuously, but data is only written to local memory if a trigger is seen. The trigger is provided by the DMAD, set to M=18. We do not read out string and SPASE triggers. If the muon-DAQ is FIRST triggered by the string trigger (which is expected, since it can form earlier), then the muon-DAQ GPS time and the TWR GPS time will differ by several microseconds. Therefore, the exact time of the M=16 trigger is sent to a spare TDC channel in the muon-DAQ, and the M=24/string trigger OR is sent to a spare TWR channel. Using this information, a relative time offset between the TWR and the muon-DAQ can be calculated. At the time we prepared this web page, the offset times had not been determined, so the merged events from the TWR-muon-DAQ data that are sent over the satellite do not have a precise time correlation.
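
Once the cross-recorded trigger times are available, the relative offset could be estimated along the lines of the sketch below; the matching of triggers and the units are simplified assumptions, not the procedure actually used.

  # Sketch of estimating the TWR / muon-DAQ clock offset from matched trigger
  # times (simplified; assumes the two lists already refer to the same triggers).

  def relative_offset(twr_times_us, muon_times_us):
      """Return the median of (TWR time - muon-DAQ time) in microseconds."""
      diffs = sorted(t - m for t, m in zip(twr_times_us, muon_times_us))
      return diffs[len(diffs) // 2]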

The event times are determined by a GPS clock. The GPS clock system was completely revamped in Jan 2003: we replaced 5 individual clocks with a GPS distribution system developed by K. Sulanke (DESY). The signals from one GPS (time messages, 1 PPS, 10 MHz) are replicated and distributed to the new VME interface cards developed by H. Leich (DESY). The VME interface was programmed to generate veto signals during transmissions by a VLF antenna installed about 2 km from AMANDA-II. The VLF antenna transmits for 1 minute every 15 minutes at precise time intervals.
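
For illustration, a time stamp can be flagged as falling inside a VLF transmission window as sketched below; the assumption that transmissions start exactly on each 15-minute boundary is made here for simplicity only.

  # Sketch of the VLF veto window check: the antenna transmits for 1 minute
  # every 15 minutes.  The alignment of the windows is an assumption here.

  VLF_PERIOD_S = 15 * 60   # one transmission every 15 minutes
  VLF_LENGTH_S = 60        # each transmission lasts 1 minute

  def in_vlf_veto(seconds_of_day):
      """Return True if the GPS time stamp falls inside a transmission window."""
      return (seconds_of_day % VLF_PERIOD_S) < VLF_LENGTH_S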

Merging, Filtering, Archiving, and Satellite Transmission:
[Figure: schematic of the data flow between the TWR DAQ and BOS]

Raw data from the TWR DAQ is transferred to BOS, where the data is first filtered to contain only high-multiplicity events and a small subset of the ordinary M=18 triggers. The TWR data is filtered to reduce the amount of data transferred between computers in BOS and over the satellite. The reduced data subset consists of the following types of events (M is simple majority logic; the majority is calculated from the number of OMs that contain waveforms, defined as one or more fragments, in the TWR event).
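
A minimal sketch of this multiplicity-based selection is shown below: M is counted as the number of OMs with at least one waveform fragment, high-multiplicity events are always kept, and ordinary triggers are kept only with a prescale. The cut value and prescale factor are placeholders, not the numbers used at Pole.

  # Sketch of the multiplicity filter; the cut and prescale are placeholders.

  def event_multiplicity(event):
      """event: mapping OM id -> list of waveform fragments."""
      return sum(1 for fragments in event.values() if len(fragments) >= 1)

  def keep_event(event, event_counter, high_m=100, prescale=1000):
      if event_multiplicity(event) >= high_m:
          return True                        # keep all high-multiplicity events
      return event_counter % prescale == 0   # keep a small subset of ordinary triggers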

For more detail on the TWR filtering/merging in the complete context of the AMANDA-II data handling system, please consult the very informative and well done data handler web page of Marcus Ackermann. The TWR filtering and merging process generates about 1 GB/day (after compression). The merged data is sent to a directory that is accessed by polechomper. The data is put in the queue for transmission over the satellite, but its priority is low compared to the standard muon-DAQ data. Polechomper also accesses the raw TWR data files and writes them to an SDLT tape drive. Due to a lack of tapes, which cost about $100 each and hold roughly 100 GB, we are only making one copy of the TWR raw data this season. We are not archiving the merged data.

To reduce the time needed to merge data, the merger first selects M=120 events before attempting to merge. The merging routines are significantly faster than last year due to several improvements. First, the TWR filenames now contain the start time and end time, so the merger programs no longer need to read through every file to access this information. Second, the filtering and merging are handled by a single program written by Jens Ahrens, so data is manipulated in memory rather than through consecutive reads and writes to disk. Third, the entire data handling operation, which includes level 1 and level 2 filtering of the muon data, TWR filtering and merging, writing data to tape, shipping files to the satellite, generating monitoring files for WO use and evaluation by personnel in the north, etc., is distributed to all available computers in the back of science by a newly installed queuing system. This results in a better distribution of computer resources in BOS.
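
The first improvement can be illustrated as follows: if the start and end times are encoded in the file name, the merger can select the files that overlap a given time window without opening them. The file-name pattern used below is invented for illustration.

  # Sketch of selecting TWR files by the time range encoded in their names.
  # The pattern 'twr_<start>_<end>.dat' is a hypothetical example.

  import re

  NAME_RE = re.compile(r"twr_(\d+)_(\d+)\.dat$")

  def files_overlapping(filenames, t_start, t_end):
      selected = []
      for name in filenames:
          m = NAME_RE.search(name)
          if not m:
              continue
          f_start, f_end = int(m.group(1)), int(m.group(2))
          if f_start <= t_end and f_end >= t_start:   # time ranges overlap
              selected.append(name)
      return selected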

Polechomper checks for new files on the TWR-PC disk every 10 minutes. If a new file is detected, it is transferred to BOS via the 1 GB optical link between MAPO and BOS.
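
In outline, that check can look like the sketch below; the directory and the transfer call are placeholders, not polechomper itself.

  # Sketch of a periodic check for new files; not the actual polechomper code.

  import os, time

  def poll_for_new_files(directory, transfer, interval_s=600):
      seen = set(os.listdir(directory))
      while True:
          time.sleep(interval_s)                       # every 10 minutes
          current = set(os.listdir(directory))
          for name in sorted(current - seen):
              transfer(os.path.join(directory, name))  # hand the new file to the transfer step
          seen = current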

Monitoring: The TWR raw data structure is converted to ROOT structures in the merging process. This allows the TWR data to be piped through the normal monitoring process this season. Several important histograms are produced to help the WOs verify normal operation and identify problems.  Please see the monitoring web page and help files for further information on TWR histograms.
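
As one example of the kind of histogram involved, the sketch below fills an OM-multiplicity histogram with PyROOT; the histogram name, binning, and the plain-dict event format are assumptions, not the actual monitoring code.

  # Sketch of filling one monitoring histogram (OM multiplicity per event)
  # with PyROOT; names, binning, and event format are illustrative.

  import ROOT

  def fill_multiplicity_histogram(events, outfile="twr_monitor.root"):
      h = ROOT.TH1F("n_om", "OMs with waveforms per event;N_OM;events", 140, 0, 700)
      for event in events:                 # event: mapping OM id -> list of fragments
          h.Fill(sum(1 for frags in event.values() if frags))
      f = ROOT.TFile(outfile, "RECREATE")
      h.Write()
      f.Close()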

Technical Specifications:

Steve Barwick 
Last modified: Feb 12 2004