This project extends the official implementation of TDA (Training-free Dynamic Adapter), a test-time adaptation
method, by exploring its efficiency, robustness, and flexibility in different real-world scenarios. We focus on
improving the test-time adaptation of Vision-Language Models through cache-based dynamic adapters, investigating
their performance under various distribution shifts and budget constraints.
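The core idea behind a cache-based dynamic adapter is to store confident test-time features as key-value pairs (feature, pseudo-label) and reuse them to refine later predictions, with no gradient updates. The sketch below is a simplified illustration of that mechanism, not the official TDA implementation; the class name, `capacity`, and `beta` sharpness parameter are our own assumptions.

```python
import numpy as np

class DynamicCache:
    """Hypothetical sketch of a cache-based dynamic adapter. Keys are
    unit-norm test-time image features; values are pseudo-labels from
    the base model. Each class keeps only its most confident entries."""

    def __init__(self, num_classes, capacity=3):
        self.num_classes = num_classes
        self.capacity = capacity
        # Per-class list of (entropy, feature) pairs, sorted by entropy.
        self.entries = {c: [] for c in range(num_classes)}

    def update(self, feature, probs):
        """Store a feature under its pseudo-label, evicting the
        highest-entropy (least confident) entry when the slot is full."""
        label = int(np.argmax(probs))
        entropy = -float(np.sum(probs * np.log(probs + 1e-12)))
        slot = self.entries[label]
        slot.append((entropy, feature))
        slot.sort(key=lambda e: e[0])   # most confident first
        del slot[self.capacity:]        # drop least confident overflow

    def logits(self, feature, beta=5.0):
        """Cache logits: similarity-weighted vote over stored features."""
        out = np.zeros(self.num_classes)
        for label, slot in self.entries.items():
            for _, key in slot:
                affinity = float(feature @ key)  # cosine sim for unit vectors
                out[label] += np.exp(-beta * (1.0 - affinity))
        return out
```

At inference, the cache logits would typically be blended with the zero-shot model's logits, e.g. `final = clip_logits + alpha * cache.logits(feature)` for some mixing weight `alpha`.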
Our work builds upon the paper "Efficient Test-Time Adaptation of Vision-Language Models" and contributes
new insights through comprehensive benchmarking, hyperparameter sensitivity analysis, and enhancement
strategies such as our Waiting List approach.