Publications

Revisiting Test Time Adaptation under Online Evaluation

This paper proposes a novel online evaluation protocol for Test Time Adaptation (TTA) methods that penalizes slower methods by providing them with fewer samples for adaptation. TTA methods leverage unlabeled data at test time to adapt to distribution shifts. Although many effective methods have been proposed, their impressive performance usually comes at the cost of a significantly increased computation budget. Current evaluation protocols overlook the effect of this extra computation, which limits the methods' real-world applicability. To address this issue, we propose a more realistic evaluation protocol for TTA methods in which data is received in an online fashion from a constant-speed data stream, thereby accounting for each method's adaptation speed. We apply the proposed protocol to benchmark several TTA methods on multiple datasets and scenarios. Extensive experiments show that, when accounting for inference speed, simple and fast approaches can outperform more sophisticated but slower methods; for example, SHOT (2020) outperforms the state-of-the-art SAR (2023) under our online setting. Our online evaluation protocol emphasizes the need for TTA methods that are both efficient and applicable in realistic settings.
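To make the protocol concrete, here is a minimal sketch of the core idea in Python: a constant-speed stream delivers labeled batches, and a method whose per-batch adaptation is r times slower than the stream only gets to adapt on every r-th batch, predicting the skipped batches with its current (stale) model. The interface (`adapt_and_predict`, `predict`) and the rounding of the speed ratio are illustrative assumptions, not the paper's exact implementation.

```python
# Illustrative sketch of online TTA evaluation under a constant-speed stream.
# Assumption: `method` exposes adapt_and_predict(x) (update + predict) and
# predict(x) (predict without updating); these names are hypothetical.
import math
import numpy as np

def online_evaluate(method, stream, relative_speed):
    """stream yields (x, y) batches at a fixed rate; relative_speed >= 1
    means one adaptation step costs that many stream intervals."""
    r = max(1, math.ceil(relative_speed))  # batches consumed per adaptation step
    correct = total = 0
    for t, (x, y) in enumerate(stream):
        if t % r == 0:
            preds = method.adapt_and_predict(x)  # method has caught up: adapt here
        else:
            preds = method.predict(x)  # stream moved on: no time to adapt
        correct += int((preds == np.asarray(y)).sum())
        total += len(y)
    return correct / total
```

Under this accounting, a method that is twice as slow as the stream adapts on only half the batches, which is exactly how the protocol penalizes computational cost.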

Ego4D: Around the world in 3,000 hours of egocentric video

We introduce Ego4D, a massive-scale egocentric video dataset and benchmark suite. It offers 3,670 hours of daily-life activity video spanning hundreds of scenarios (household, outdoor, workplace, leisure, etc.) captured by 931 unique camera wearers from 74 worldwide locations and 9 different countries. The approach to collection is designed to uphold rigorous privacy and ethics standards, with consenting participants and robust de-identification procedures where relevant. Ego4D dramatically expands the volume of diverse egocentric video footage publicly available to the research community. Portions of the video are accompanied by audio, 3D meshes of the environment, eye gaze, stereo, and/or synchronized videos from multiple egocentric cameras at the same event. Furthermore, we present a host of new benchmark challenges centered around understanding the first-person visual experience in the past (querying an episodic memory), present (analyzing hand-object manipulation, audio-visual conversation, and social interactions), and future (forecasting activities). By publicly sharing this massive annotated dataset and benchmark suite, we aim to push the frontier of first-person perception.

SeedQuant: a deep learning-based tool for assessing stimulant and inhibitor activity on root parasitic seeds

Witchweeds and broomrapes are root parasitic weeds that represent one of the main threats to global food security. By drastically reducing host crops' yield, these parasites are often responsible for enormous economic losses, estimated in the billions of dollars annually. Parasitic plants rely on chemical cues in the rhizosphere that indicate the presence of a nearby host plant. Exploiting this host dependency, research on parasitic plants focuses on understanding the triggers of parasitic seed germination, either to reduce germination in the presence of crops or to provoke germination in the absence of hosts (i.e., suicidal germination). For this purpose, a number of synthetic analogs and inhibitors have been developed, and their biological activities have been studied on parasitic plants around the world using various protocols. Current studies use germination-based bioassays, in which preconditioned parasitic seeds are exposed to a chemical or to plant root exudates and the germination ratio is then assessed. Although these protocols are very sensitive at the chemical level, recording the germination rate is time-consuming, represents a tedious task for researchers, and could easily be sped up by automated seed detection algorithms. To accelerate such protocols, we propose an automatic seed counting tool built on recent advances in computer vision. We use a deep learning approach to object detection, Faster R-CNN, to count and discriminate germinated from non-germinated seeds. Our method achieves 95% counting accuracy on completely new images and reduces the counting time by a significant margin, from about 5 minutes to a fraction of a second per image. We believe our proposed software, "SeedQuant", will be of great help in lab bioassays for large-scale chemical screening in parasitic seed applications.
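As a rough illustration of how such a counting step might look with an off-the-shelf detector, the sketch below builds a torchvision Faster R-CNN with two foreground classes (germinated and non-germinated seeds) and tallies detections above a confidence threshold. The class indices, score threshold, and untrained weights are assumptions made for illustration; they do not reflect SeedQuant's actual configuration or training.

```python
# Hedged sketch of a two-class seed counter built on torchvision's Faster R-CNN.
# Assumptions: label 1 = germinated, label 2 = non-germinated; these indices,
# the threshold, and the lack of trained weights are illustrative only.
import torch
from torchvision.models.detection import fasterrcnn_resnet50_fpn
from torchvision.models.detection.faster_rcnn import FastRCNNPredictor

NUM_CLASSES = 3  # background + germinated + non-germinated

def build_detector():
    model = fasterrcnn_resnet50_fpn(weights=None)  # would be trained on seed images
    in_features = model.roi_heads.box_predictor.cls_score.in_features
    # Replace the default box predictor with one sized for our two seed classes.
    model.roi_heads.box_predictor = FastRCNNPredictor(in_features, NUM_CLASSES)
    return model

@torch.no_grad()
def count_seeds(model, image, score_threshold=0.5):
    """image: float tensor of shape (3, H, W) with values in [0, 1]."""
    model.eval()
    output = model([image])[0]  # dict with "boxes", "labels", "scores"
    keep = output["scores"] >= score_threshold
    labels = output["labels"][keep]
    return {
        "germinated": int((labels == 1).sum()),
        "non_germinated": int((labels == 2).sum()),
    }
```

The germination ratio then follows directly from the two per-class counts, which is the quantity the bioassays record by hand.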