
The XENON1T Data Acquisition System

Featuring several kilometers of cables, dozens of analog electronics modules, and crates of purpose-built specialty computers, all backed by a small server farm, the XENON1T data acquisition system (DAQ) was designed to put our data onto disk. The XENON Collaboration recently published a technical paper on our DAQ in JINST, of course also available on arXiv.

The XENON1T detector measures light, which produces analog electrical signals in 248 independent photo-sensors. The DAQ is responsible for converting these analog signals to a digital, storage-ready format, deciding which kinds of aggregate signal indicate the presence of a physical interaction in the detector, and recording all the interesting data onto disk for later analysis.

A photo of the XENON1T DAQ room, deep underground at the Gran Sasso lab. Pictured left to right: the DAQ server rack, (red) digitizers (amplifiers facing backwards), cathode high voltage supply, muon veto DAQ, slow control server rack.

There are a couple of novel aspects to this system. The first is that data is streamed constantly from the readout electronics onto short-term storage, recording all signals above a single photoelectron with high (>93%) efficiency. This differs from a conventional data acquisition system, which usually requires certain hardware conditions, collectively called a trigger, to be met before acquisition begins. We defer our trigger to the software stage, giving us a very low energy threshold.
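To make that concrete, here is a minimal sketch of what triggerless, continuous readout looks like. The digitizer interface, channel count loop, and threshold value are all illustrative stand-ins, not the real XENON1T readout code:

```python
import random
import time

SPE_THRESHOLD_ADC = 15   # hypothetical per-channel threshold, ~1 photoelectron
N_CHANNELS = 248         # XENON1T has 248 photo-sensors

def digitize(channel):
    """Stand-in for one self-triggered pulse from a digitizer channel."""
    return {"channel": channel,
            "time_ns": time.time_ns(),
            "amplitude": random.randint(0, 100)}

def stream(n_cycles, storage):
    """Stream every above-threshold pulse to short-term storage.

    There is no hardware trigger: anything over the single-photoelectron
    threshold is kept, and event building is deferred to software.
    """
    for _ in range(n_cycles):
        for channel in range(N_CHANNELS):
            pulse = digitize(channel)
            if pulse["amplitude"] >= SPE_THRESHOLD_ADC:
                storage.append(pulse)

buffer = []
stream(10, buffer)
print(f"stored {len(buffer)} pulses")
```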

The software trigger itself was implemented as a database query, which is another novel aspect of the system. Pre-trigger data was stored in a MongoDB NoSQL database, and the trigger logic scanned the database looking for signals consistent with S1s (light) and S2s (charge). If the algorithm found a matching signal, it would retrieve all the nearby data from the database and write it to storage. Because of the speed of NoSQL databases, this worked the same in both dark matter search mode, where we record just a few counts per second, and calibration modes, where we could record hundreds of counts per second.
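As a rough illustration of what a trigger-as-database-query can look like, the sketch below uses pymongo against an assumed local MongoDB instance. The collection name, field names (area_pe, time_ns), and thresholds are invented for illustration and are not the actual XENON1T schema or trigger logic:

```python
from pymongo import MongoClient

client = MongoClient("mongodb://localhost:27017")  # assumed local instance
pulses = client["daq"]["pulses"]

WINDOW_NS = 1_000_000  # read out +/- 1 ms of context around each trigger

def find_s2_candidates(min_area=100, min_coincidence=4):
    """Scan untriggered pulses for S2-like signals: large-area pulses
    seen on several channels at about the same time."""
    big = pulses.find({"area_pe": {"$gte": min_area}}).sort("time_ns", 1)
    for pulse in big:
        coincident = pulses.count_documents({
            "time_ns": {"$gte": pulse["time_ns"] - 1000,
                        "$lte": pulse["time_ns"] + 1000},
        })
        if coincident >= min_coincidence:
            yield pulse

def build_event(trigger_pulse):
    """Retrieve all nearby pulses from the database for writing to storage."""
    return list(pulses.find({
        "time_ns": {"$gte": trigger_pulse["time_ns"] - WINDOW_NS,
                    "$lte": trigger_pulse["time_ns"] + WINDOW_NS},
    }))
```

Because the database does the heavy lifting of indexing and range queries, the same scan works whether pulses arrive at a few counts per second or hundreds.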

To complete the high-tech upgrade of our system, we also ran the user interface as a web service. This means the system could be controlled from laptops, smartphones, or tablets anywhere with a 4G connection, contributing to the high uptime of the detector.
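A minimal sketch of the idea, using Flask as a stand-in web framework; the endpoints and state handling here are hypothetical, not the actual XENON1T interface:

```python
from flask import Flask, jsonify

app = Flask(__name__)
daq_state = {"status": "idle", "mode": None}

@app.route("/status")
def status():
    """Any browser-capable device (laptop, phone, tablet) can poll this."""
    return jsonify(daq_state)

@app.route("/start/<mode>", methods=["POST"])
def start(mode):
    """Start a run in, e.g., 'dark_matter' or 'calibration' mode."""
    daq_state.update(status="running", mode=mode)
    return jsonify(daq_state)

@app.route("/stop", methods=["POST"])
def stop():
    daq_state.update(status="idle", mode=None)
    return jsonify(daq_state)

if __name__ == "__main__":
    app.run(host="0.0.0.0", port=8080)
```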

The DAQ is currently being updated to double its capacity to read out the XENONnT detector, so stay tuned.

XENON1T presented at Rencontres de Moriond Electroweak

Last week I had the opportunity to present the XENON1T experiment at the Rencontres de Moriond electroweak conference in La Thuile, Italy, in the beautiful Aosta Valley. This meeting is one of the most important meetings for LHC physics, but it has slowly expanded to encompass a variety of topics, including the hunt for dark matter. The conference program and slides are available on indico. The XENON1T presentation focused on our dark matter search results from last spring as well as the upcoming result using about a factor of 10 more exposure, which is under intense preparation for release. The whole presentation is available from the indico page, but here is one slide from it:

Here we discuss how we were able to increase the amount of liquid xenon we use for our dark matter search from ~1000 kg to ~1300 kg. The top left plot shows an example of a larger search volume (red) compared to the smaller volume used for the first result. But it's not as simple as just adding volume. While our inner detector is completely free of WIMP-like background, the outer radii contain background components that can mimic WIMPs. This is illustrated in the bottom right plot, where the background-free inner volume (right) is contrasted with the full search volume containing the outer radial sections (left). The full volume has a contribution from PTFE (Teflon) surface background (green contour and points) that disappears as soon as we consider only the inner volume.

Our statistical interpretation has been updated to take this into account. We parameterize our entire search region as a function of position, with the expected signal and background distributions described at each location. This allows us to fully exploit the sensitivity of our innermost, background-free volumes while also gaining a modest improvement from the outermost ones.
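In spirit, this amounts to summing a likelihood over spatial partitions, each with its own expected signal and background. The toy sketch below uses invented numbers purely to illustrate why a background-free inner partition dominates the constraint; it is not the actual XENON1T statistical model:

```python
import math

# (name, expected background counts, expected signal counts at mu=1, observed)
# All numbers are made up for illustration.
partitions = [
    ("inner", 0.1, 3.0, 0),   # nearly background-free
    ("outer", 5.0, 1.5, 5),   # surface background leaks in here
]

def log_likelihood(mu):
    """Sum of Poisson log-likelihoods over all spatial partitions,
    with the signal strength mu scaling the signal expectation."""
    total = 0.0
    for _, bkg, sig, observed in partitions:
        expected = bkg + mu * sig
        total += observed * math.log(expected) - expected - math.lgamma(observed + 1)
    return total

# The clean inner volume penalizes large mu far more than the outer one:
for mu in (0.0, 0.5, 1.0, 2.0):
    print(f"mu={mu}: logL={log_likelihood(mu):.2f}")
```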