Tag Archives: DAQ

The XENON1T Data Acquisition System

Featuring several kilometers of cables, dozens of analog electronics modules, and crates of purpose-built specialty computers, all backed by a small server farm, the XENON1T data acquisition system (DAQ) was designed to put our data onto disk. The XENON Collaboration recently published a technical paper on the DAQ in JINST, which is of course also available on arXiv.

The XENON1T detector measures light, which creates analog electrical signals in 248 independent photo-sensors. The DAQ is responsible for converting these analog signals to a digital, storage-ready format, deciding what types of aggregate signal indicate the presence of a physical interaction in the detector, and writing all the interesting data to disk for later analysis.

A photo of the XENON1T DAQ room, deep underground at the Gran Sasso lab. Pictured left to right: the DAQ server rack, (red) digitizers (amplifiers facing backwards), cathode high voltage supply, muon veto DAQ, slow control server rack.

There are a couple of novel aspects to this system. The first is that data is streamed constantly from the readout electronics onto short-term storage, recording all signals above a single photo-electron with high (>93%) efficiency. This is different from a conventional data acquisition system, which typically requires certain hardware conditions, called a trigger, to be met before acquisition begins. We defer our trigger to the software stage, giving us a very low energy threshold.
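The self-triggering idea can be sketched in a few lines: every excursion above a small per-channel threshold is kept, with no global hardware condition. This is a toy illustration only; the threshold value, units, and pulse-finding logic are placeholders, not the real firmware.

```python
# Toy sketch of "triggerless" per-channel readout: every pulse above a
# single-photoelectron-scale threshold is digitized and kept, with no
# hardware trigger condition. Threshold and units are illustrative only.
THRESHOLD = 15  # ADC counts above baseline; hypothetical value

def extract_pulses(waveform, threshold=THRESHOLD):
    """Return (start, end) sample ranges where the waveform exceeds threshold."""
    pulses = []
    start = None
    for i, sample in enumerate(waveform):
        if sample > threshold and start is None:
            start = i
        elif sample <= threshold and start is not None:
            pulses.append((start, i))
            start = None
    if start is not None:
        pulses.append((start, len(waveform)))
    return pulses

# A flat baseline with two small excursions: both are recorded, trigger-free.
waveform = [0] * 10 + [40, 80, 30] + [0] * 10 + [20, 25] + [0] * 5
print(extract_pulses(waveform))  # [(10, 13), (23, 25)]
```

In the real system this runs in the digitizer firmware on each of the 248 channels, which is what allows every single photo-electron to reach storage.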

The software trigger itself was implemented as a database query, which is another novel aspect of the system. Pre-trigger data was stored in a MongoDB NoSQL database, and the trigger logic scanned the database looking for signals consistent with S1s (light) and S2s (charge). If the algorithm found a matching signal, it retrieved all the nearby data from the database and wrote it to storage. Because of the speed of NoSQL databases, this worked equally well in dark matter search mode, where we record just a few counts per second, and in calibration modes, where we could record hundreds of counts per second.
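The trigger logic can be illustrated with a minimal in-memory mock, where a plain Python list stands in for the MongoDB pulse collection. The field names, window sizes, and the simple coincidence requirement below are hypothetical stand-ins, not the actual XENON1T trigger conditions.

```python
# Sketch of the software-trigger idea: scan time-sorted pulse documents for
# clusters dense enough to look like a signal, then fetch all nearby data.
# Field names and window sizes are illustrative placeholders.
pulses = [  # each document: one digitized pulse, sorted by time (ns)
    {"time": 100, "channel": 0},
    {"time": 105, "channel": 3},
    {"time": 110, "channel": 7},   # 3 pulses within 50 ns -> candidate signal
    {"time": 5000, "channel": 2},  # isolated pulse -> no trigger
]

COINCIDENCE_WINDOW = 50    # ns within which pulses must cluster
MIN_COINCIDENT = 3         # pulses required to call it a signal
READOUT_WINDOW = 1000      # ns of data saved around a trigger

def find_triggers(pulses):
    """Scan time-sorted pulses for clusters dense enough to trigger."""
    triggers = []
    for i, p in enumerate(pulses):
        window = [q for q in pulses[i:] if q["time"] - p["time"] <= COINCIDENCE_WINDOW]
        if len(window) >= MIN_COINCIDENT:
            triggers.append(p["time"])
    return triggers

def read_out(pulses, t0):
    """Mimic the follow-up query: fetch all pulses near the trigger time."""
    return [p for p in pulses if abs(p["time"] - t0) <= READOUT_WINDOW]

for t0 in find_triggers(pulses):
    print(t0, read_out(pulses, t0))
```

In the deployed system both scans would instead be expressed as database queries, e.g. range queries on an indexed time field, which is what makes the approach fast enough for calibration rates.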

To complete the high-tech upgrade of our system, we also ran the user interface as a web service. This means the system could be controlled from laptops, smartphones, or tablets anywhere with a 4G connection, contributing to the high uptime of the detector.

The DAQ is currently being updated to double its capacity to read out the XENONnT detector, so stay tuned.

Outlining the XENONnT Computing Scheme at the 2nd Rucio Community Workshop in Oslo

Oslo welcomed all 66 participants of the second Rucio Community Workshop with pleasant weather and a venue offering an astonishing view of the capital of Norway.
The open-source, contribution-driven model of the Rucio data management tool is attracting more and more attention from numerous fields. This year, 21 communities reported on the implementation of Rucio in their current data workflows, discussed possible improvements with the Rucio development team, and chatted with each other during the coffee breaks to learn from others' experiences. Among the presenting communities were the DUNE experiment, Belle II, and LSST. The XENON Dark Matter Collaboration presented the computing scheme of the upcoming XENONnT experiment. Two keynote talks, from Richard Hughes-Jones (University of Maryland) and Gudmund Høst (NeIC), highlighted the concepts of the upcoming generation of academic networks and the Nordic e-Infrastructure Collaboration.

After the successful XENON1T stage, with two major science runs, a world-leading limit on spin-independent dark matter interactions with nucleons, and further publications, the XENON1T experiment stopped data taking in December 2018. We aim for two major upgrades in the successor stage, XENONnT: a larger time projection chamber (TPC), holding ~8,000 kg of liquid xenon with 496 PMTs for signal readout, and an additional neutron veto detector based on gadolinium-doped water in our water tank. This requires upgrades to our current data management and processing scheme, which was presented last year at the first Rucio Community Workshop. A fundamental change is the new data processor, STRAX, which allows much faster data processing. Starting from the recorded raw data, the final data product is built up through distinct intermediate processing stages that depend on each other. We therefore stop using our “classical” data scheme of raw data, processed data and minitrees, and instead aim for a more flexible data structure. All stages of the data are nevertheless distributed with Rucio to connected grid computing facilities. STRAX will be able to process data from the TPC, the MuonVeto and the NeutronVeto together to allow coincident analyses.
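The idea of intermediate stages that depend on each other can be sketched as a small dependency graph that is resolved in order, loosely inspired by the STRAX plugin model. The stage names and the graph below are hypothetical placeholders, not the real XENONnT data types.

```python
# Minimal sketch of dependent data stages: each stage names the stages it
# is built from, and a target is produced by resolving dependencies first.
# Stage names and the dependency graph are hypothetical.
DEPENDS_ON = {
    "raw_records": [],
    "records": ["raw_records"],
    "peaks": ["records"],
    "events": ["peaks"],
}

def build_order(target, deps=DEPENDS_ON):
    """Return the stages needed to produce `target`, dependencies first."""
    order = []
    def visit(stage):
        for parent in deps[stage]:
            visit(parent)
        if stage not in order:
            order.append(stage)
    visit(target)
    return order

print(build_order("events"))
# ['raw_records', 'records', 'peaks', 'events']
```

Because every intermediate stage is a first-class data product, any of them can be registered in Rucio and distributed to analysts, which is what makes the scheme more flexible than the old raw/processed/minitree split.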

The data flow of the XENONnT experiment. A first set of data is already processed at LNGS. All data types are distributed to the analysts with Rucio.

Reprocessing campaigns are planned with HTCondor and DAGMan jobs at EGI and OSG, similar to the XENON1T setup. Because of the faster data processor, a well-established read-and-write routine with Rucio becomes necessary to guarantee quick data access.
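A reprocessing campaign of this kind can be sketched as a generated DAGMan description: one processing job per run, all feeding a final merge step. The submit-file names and run IDs below are placeholders, not the real XENONnT job layout.

```python
# Hedged sketch: emit a DAGMan description for a reprocessing campaign,
# one job per run plus a final merge step that waits for all of them.
# Submit-file names and run IDs are hypothetical placeholders.
def make_dag(run_ids, process_sub="process.sub", merge_sub="merge.sub"):
    lines = []
    for run in run_ids:
        lines.append(f"JOB proc_{run} {process_sub}")
        lines.append(f'VARS proc_{run} run_id="{run}"')
    lines.append(f"JOB merge {merge_sub}")
    parents = " ".join(f"proc_{run}" for run in run_ids)
    lines.append(f"PARENT {parents} CHILD merge")
    return "\n".join(lines)

print(make_dag(["run_001", "run_002"]))
```

DAGMan then handles ordering and retries, while each per-run job reads its input from, and writes its output back to, the Rucio-managed storage elements.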
Another major update in the XENONnT computing scheme concerns the tape backup location. Because of the increased number of disk and tape allocations in the Rucio catalogue, we will abandon the Rucio-independent tape backup in Stockholm and use dedicated Rucio storage elements for storing the raw data. The XENON1T experiment collected ~780 TB of (raw) data during its lifetime, all of which is managed by Rucio. The XENON Collaboration is looking forward to continuing this success story with XENONnT.