Federated Continual Learning
This project explores Federated Continual Learning (FCL), a setting where multiple clients collaboratively train a shared global model as new tasks arrive over time. A major challenge in FCL is catastrophic forgetting, where learning new tasks degrades performance on earlier tasks.
Our research introduces a nonparametric Bayesian method that dynamically adjusts the model's neural plasticity as tasks evolve. By leveraging an Indian Buffet Process (IBP) prior, we determine which network weights remain active for each task. To combat forgetting, we develop a regularization strategy that penalizes deviations from all previously learned task weights while remaining memory-efficient: an exponential-family memoization scheme eliminates the need to store past models.
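As a rough illustration of the idea, the sketch below draws per-unit activation probabilities from a stick-breaking construction of the IBP and applies a quadratic pull toward consolidated past weights; the concentration parameter, layer width, and penalty form are placeholder assumptions, not the project's actual configuration.

```python
import torch

# Illustrative IBP-style stick-breaking over K candidate units; alpha, K,
# and the penalty form below are placeholder assumptions.
alpha, K = 2.0, 64

# Stick-breaking construction: v_j ~ Beta(alpha, 1), pi_k = prod_{j<=k} v_j
v = torch.distributions.Beta(alpha, 1.0).sample((K,))
pi = torch.cumprod(v, dim=0)   # per-unit activation probabilities

# Binary mask choosing which units stay plastic for the current task
mask = torch.bernoulli(pi)

def consolidation_penalty(w, w_old, importance, lam=1.0):
    """Quadratic penalty toward consolidated past weights.

    Keeping running importance-weighted statistics, rather than every
    past model, keeps memory constant in the number of tasks, in the
    spirit of the memoization scheme described above.
    """
    return lam * torch.sum(importance * (w - w_old) ** 2)
```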

Cross-domain Image Processing and Synthesis
Training AI systems requires increasingly large datasets for improved performance. While visible-light images are abundant, sonar, radar, and infrared (IR) images are far scarcer, motivating synthetic approaches to dataset generation in these domains.
We build upon GeNVS, a method combining Neural Radiance Fields (NeRFs) and diffusion models, to generate plausible novel views from sparse input data. By adapting GeNVS with domain-specific embeddings and physics-based models, we can generate physically accurate synthetic views beyond the visible-light spectrum. Our goal is to enable seamless conversion between sensor modalities, improving data availability across these domains.
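One possible way to realize the domain-specific conditioning described above is a learned per-modality embedding added to the feature images that condition the diffusion denoiser. The hypothetical sketch below illustrates this; DomainConditioner, the modality list, and the feature dimension are illustrative names, not the actual GeNVS interface.

```python
import torch
import torch.nn as nn

class DomainConditioner(nn.Module):
    """Hypothetical: inject a learned sensor-modality code into NeRF features."""

    def __init__(self, feat_dim=128, modalities=("visible", "ir", "sar", "sonar")):
        super().__init__()
        self.embed = nn.Embedding(len(modalities), feat_dim)
        self.idx = {m: i for i, m in enumerate(modalities)}

    def forward(self, feats, modality):
        # feats: (B, C, H, W) feature images rendered by the NeRF branch,
        # with C == feat_dim in this sketch
        e = self.embed(torch.tensor([self.idx[modality]], device=feats.device))
        return feats + e.view(1, -1, 1, 1)  # broadcast-add the domain code
```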

SAR Simulator
We are developing a synthetic aperture radar (SAR) simulator that operates directly on 3D object models in the standard .obj format. By using ray tracing, the simulator accurately accumulates both range and energy information from the scene, producing realistic radar returns. These raw returns are then processed with signal processing techniques that enforce a chosen sample rate and bandwidth, allowing us to closely model the properties of real radar systems. The resulting data can then be passed through standard SAR reconstruction algorithms to generate high-fidelity SAR images from virtual scenes.
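A minimal sketch of the accumulation step, assuming ray-hit ranges and energies are already available; the sample-rate and bandwidth values are placeholders, and bandwidth enters here only through the range resolution c/(2B) it implies.

```python
import numpy as np

c = 3e8               # speed of light (m/s)
sample_rate = 200e6   # receiver sample rate (Hz), placeholder value
bandwidth = 150e6     # chirp bandwidth (Hz) -> range resolution c/(2B) = 1 m

def range_profile(hit_ranges, hit_energies, n_samples=4096):
    """Accumulate ray-traced returns into discrete fast-time samples."""
    profile = np.zeros(n_samples, dtype=np.complex128)
    delays = 2.0 * hit_ranges / c                    # two-way travel time (s)
    bins = np.round(delays * sample_rate).astype(int)
    valid = bins < n_samples
    # Coherently sum the energy of all rays arriving in the same bin
    np.add.at(profile, bins[valid], hit_energies[valid])
    return profile
```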
Beyond basic imaging, the simulator also provides a platform for testing radar performance in challenging scenarios. We can introduce controlled noise, interference, or even electronic jamming effects into the signal, simulating conditions that occur in real-world long-range radar operations. This flexibility makes the simulator a valuable tool for exploring both the capabilities and limitations of SAR in complex, adversarial, and degraded environments.
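The sketch below gives the flavor of such degradations, injecting thermal noise at a chosen SNR plus an optional narrowband jamming tone; the parameters are illustrative, and real interference and jamming models are considerably richer.

```python
import numpy as np

def add_interference(profile, snr_db=20.0, jam_tone_hz=None, sample_rate=200e6):
    """Inject complex Gaussian noise and, optionally, a CW jamming tone."""
    power = np.mean(np.abs(profile) ** 2)
    noise_std = np.sqrt(power / (2 * 10 ** (snr_db / 10)))
    noisy = profile + noise_std * (np.random.randn(profile.size)
                                   + 1j * np.random.randn(profile.size))
    if jam_tone_hz is not None:
        # Narrowband jammer modeled as a constant-amplitude complex tone
        t = np.arange(profile.size) / sample_rate
        noisy = noisy + np.sqrt(power) * np.exp(2j * np.pi * jam_tone_hz * t)
    return noisy
```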

Enhancing SAR Automatic Target Recognition with Simulated Data
This project investigates how high-quality simulated Synthetic Aperture Radar (SAR) data can improve the robustness of Automatic Target Recognition (ATR) systems. Modern deep learning excels in computer vision but is heavily biased toward electro-optical (RGB) data and large, readily available datasets. SAR systems, by contrast, suffer from limited real data, which hinders the development of models that perform well in real-world conditions.
We aim to develop low-cost, high-fidelity SAR data generation methods using modern generative models (e.g., GANs, diffusion models) to supplement scarce real data. By integrating simulated data into training, we will evaluate whether it can boost the performance of existing SAR models, paving the way for more reliable SAR ATR systems in dynamic operational environments.
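One simple way to fold simulated chips into training is to oversample the scarce real data so each batch stays balanced. The sketch below uses placeholder tensors in place of real and generated SAR chips, and the 50/50 per-batch mix is an assumption to be tuned experimentally.

```python
import torch
from torch.utils.data import (ConcatDataset, DataLoader, TensorDataset,
                              WeightedRandomSampler)

# Placeholder stand-ins for measured (scarce) and simulated (abundant) chips
real = TensorDataset(torch.randn(200, 1, 64, 64), torch.randint(0, 10, (200,)))
sim = TensorDataset(torch.randn(5000, 1, 64, 64), torch.randint(0, 10, (5000,)))
mixed = ConcatDataset([real, sim])

# Upweight the scarce real chips so each batch is roughly half real
weights = torch.cat([torch.full((len(real),), 0.5 / len(real)),
                     torch.full((len(sim),), 0.5 / len(sim))])
loader = DataLoader(mixed, batch_size=64,
                    sampler=WeightedRandomSampler(weights, num_samples=len(mixed)))
```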

Lightweight Spatio-Temporal Anomaly Detection for Infrared Drone Surveillance
Detecting small drones in infrared (IR) video sequences presents significant challenges due to their tiny spatial footprint, low contrast, and the presence of complex and cluttered backgrounds. These conditions often result in elevated false alarm rates. To address this, we are exploring a novel approach that frames drone detection as an anomaly detection problem, focusing on identifying deviations from the background model rather than relying on conventional object detection pipelines.
Our proposed framework combines the classical Reed-Xiaoli (RX) anomaly detector with a lightweight convolutional neural network (CNN) for efficient classification. The goal is a computationally efficient detection pipeline that can operate in real time on resource-constrained platforms such as embedded systems or edge devices.
Inspired by TCRNet, we plan to extract 3D spatio-temporal chips that integrate spatial and temporal context, improving the model’s ability to perceive small moving targets in dynamic scenes. These chips will be processed using a low-complexity CNN trained to differentiate true drone targets from background clutter. The RX detector will serve as a region proposal mechanism in the spatio-temporal domain, highlighting potential anomalies that are then classified by the CNN.
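For reference, the RX score of an observation x is its Mahalanobis distance from the background statistics (mean μ, covariance Σ): r(x) = (x − μ)ᵀ Σ⁻¹ (x − μ). A minimal sketch over flattened spatio-temporal chips follows, with chip extraction and the CNN classifier assumed to sit upstream and downstream of this step; shapes and the threshold are illustrative.

```python
import numpy as np

def rx_scores(chips):
    """chips: (N, D) array, each row a flattened 3D spatio-temporal chip."""
    mu = chips.mean(axis=0)
    cov = np.cov(chips, rowvar=False) + 1e-6 * np.eye(chips.shape[1])
    cov_inv = np.linalg.inv(cov)
    centered = chips - mu
    # Mahalanobis distance of each chip from the background model
    return np.einsum('nd,dk,nk->n', centered, cov_inv, centered)

# Chips whose score exceeds a threshold become region proposals for the CNN
scores = rx_scores(np.random.randn(1000, 75))   # e.g., flattened 5x5x3 chips
proposals = np.where(scores > np.quantile(scores, 0.99))[0]
```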
By combining traditional statistical detection with minimal deep learning, we aim to achieve a practical balance between detection performance and computational overhead, enabling real-time deployment without sacrificing accuracy.

Cross-Architecture Knowledge Consolidation Under Data Constraints
Modern vision systems are built by different teams on evolving datasets, often under privacy and bandwidth constraints. Retraining from scratch is expensive, and naïve fine-tuning triggers catastrophic forgetting. Our lab develops principled methods to consolidate knowledge across models and update them over time—even when the original data are unavailable. We focus on three directions: (1) Multi-teacher distillation to merge expertise from heterogeneous architectures, disjoint or shifting label spaces, and different model versions while resolving conflicts and class mismatches; (2) Data-free distillation that synthesizes training images directly from teacher models (via logits, intermediate statistics, or decision boundaries) to operate when data cannot be shared; and (3) Continual Knowledge Distillation (KD) without rehearsal, alternating or interleaving multi-teacher KD with synthetic data generation so students absorb new knowledge while preserving old.
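As a concrete example of direction (1), the sketch below distills a student against the averaged, temperature-softened outputs of several teachers; aligning mismatched label spaces into a shared class set is assumed to have happened beforehand, and the temperature is a placeholder.

```python
import torch
import torch.nn.functional as F

def multi_teacher_kd_loss(student_logits, teacher_logits_list, T=4.0):
    """KL divergence from the student to the mean of softened teacher outputs."""
    teacher_probs = torch.stack(
        [F.softmax(t / T, dim=-1) for t in teacher_logits_list]).mean(dim=0)
    log_student = F.log_softmax(student_logits / T, dim=-1)
    # T^2 rescales gradients to match the hard-label loss scale (Hinton et al.)
    return F.kl_div(log_student, teacher_probs, reduction='batchmean') * T * T
```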
These efforts yield compact, deployable students that retain past performance while integrating new classes or domains, using model-centric training that requires only teacher access. We provide reproducible protocols for teacher aggregation under class mismatch, data-free synthesis tailored for KD, and continual KD with measurable forgetting/retention trade-offs. Applications span edge AI, regulated domains, and collaborative settings where data cannot move; we release open code, benchmarks, and pretrained artifacts to support community adoption and extension.
