Overview
My research is fueled by a dual passion: customizing state-of-the-art AI and ML techniques to solve critical challenges in power system operations, and drawing inspiration from the unique physics of real-world energy systems to advance AI algorithms. By tailoring methods like reinforcement learning, stochastic optimization, and privacy-preserving analytics, I aim to improve grid reliability, efficiency, and sustainability under the uncertainties of renewable integration.
Simultaneously, I seek to push the boundaries of AI itself. For instance, the modularity inherent in energy storage dynamics inspired my recent work on approximate factorization for reinforcement learning, breaking the curse of dimensionality. Whether it is rethinking predictive models, balancing market incentives, or managing energy storage, my research thrives at the nexus of theory and practicality—turning uncertainty into an engine for innovation.
#1: Privacy-Preserving Smart Meter Analytics
The Problem
Smart meters provide high-resolution data crucial for grid management, but they also expose highly sensitive user behavior and lifestyle habits. As smart grid penetration grows, a critical challenge emerges: How can utilities harness data-driven insights without compromising consumer trust or privacy?
Our Core Idea
At the center of this research line is a privacy-preserving framework utilizing differential privacy to obscure sensitive details while retaining essential statistical properties. This enables utilities to perform critical tasks like forecasting and grid optimization with minimal performance loss, fundamentally breaking the zero-sum trade-off between privacy and data utility.
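The core mechanism can be illustrated with a minimal sketch. This is not our published framework, only the textbook Laplace mechanism applied to a load profile: noise is calibrated to the query sensitivity and the privacy budget epsilon, so a smaller epsilon gives stronger privacy at the cost of noisier data. The function name and the example numbers are illustrative assumptions.

```python
import numpy as np

def laplace_mechanism(load_profile, sensitivity, epsilon, rng=None):
    """Add Laplace noise with scale (sensitivity / epsilon) to a load profile.

    Standard epsilon-differential-privacy mechanism: smaller epsilon means
    stronger privacy guarantees but larger perturbation of the data.
    """
    rng = rng or np.random.default_rng()
    scale = sensitivity / epsilon
    noise = rng.laplace(loc=0.0, scale=scale, size=len(load_profile))
    return np.asarray(load_profile, dtype=float) + noise

# Hourly household consumption in kWh (illustrative values).
profile = [0.4, 0.3, 0.3, 0.5, 1.2, 2.1, 1.8, 0.9]
private = laplace_mechanism(profile, sensitivity=2.5, epsilon=1.0)
```

Because the noise is zero-mean, aggregate statistics (e.g., feeder-level totals over many households) remain nearly unbiased, which is what lets forecasting survive the perturbation.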
The Controversy
A prevailing assumption is that privacy mechanisms inherently degrade data utility. Skeptics argued that introducing noise compromises critical grid operations, and debated whether mathematical privacy guarantees could suffice against real-world adversarial threats without destroying system efficiency.
The Truth
Through rigorous analysis, we demonstrated that carefully calibrated differential privacy can provide robust guarantees while keeping forecasting errors within acceptable thresholds. Furthermore, we designed an information-ratio-based pricing scheme for noisy data that is independent of downstream tasks, revealing the true economic value of privacy-protected data.
Broad Exploration
Beyond the foundational framework, I have explored user-centric designs in Non-Intrusive Load Monitoring (NILM), leveraging network flow-based optimization to offer customizable privacy solutions. We have also enhanced privacy for time-series data without sacrificing its statistical structure and developed differentially private generative models (DPWGAN) for synthesizing high-quality load datasets.
[1] C. Lu, et al., "Privacy Preserving User Energy Consumption Profiling: From Theory to Application," IEEE Transactions on Smart Grid, 2024.
[2] J. Zhang, C. Lu, et al., "User-perceptional privacy protection in NILM," Applied Energy, 2025.
[3] H. Wang, C. Wu, "Privacy Preservation for Time Series Data in the Electricity Sector," IEEE TSG, 2023.
[4] J. Huang, et al., "DPWGAN: High-Quality Load Profiles Synthesis," IEEE TSG, 2023.
[5] H. Wang, et al., "Privacy Preserving in NILM: A Differential Privacy Perspective," IEEE TSG, 2021.
#2: Statistically Feasible Power System Operation
The Problem
Renewable integration introduces massive uncertainty into power system operations, particularly the Unit Commitment (UC) problem. Traditional robust optimization is often excessively conservative and expensive, while obtaining accurate probability distributions for chance constraints is practically impossible. How can operators ensure reliability and economic efficiency without relying on unattainable distributional knowledge?
Our Core Idea
We introduced a "statistically feasible" robust optimization framework that constructs uncertainty sets directly from observed samples. To overcome the computational bottleneck of mixed-integer programming, we integrated a learning-to-optimize (L2O) acceleration methodology that transforms complex combinatorial problems into tractable linear programs, drastically cutting solution times.
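The data-driven principle behind sample-based uncertainty sets can be sketched simply. The snippet below is an illustrative stand-in, not our two-phase adaptive construction: it builds a box uncertainty set from per-dimension empirical quantiles of historical forecast errors, covering roughly a (1 - alpha) fraction of samples in each dimension without assuming any distribution. All names and numbers are hypothetical.

```python
import numpy as np

def sample_based_box_set(samples, alpha=0.05):
    """Box uncertainty set from observed samples, no distribution assumed.

    Per-dimension empirical quantiles give lower/upper bounds that cover
    roughly a (1 - alpha) fraction of the samples in each dimension.
    """
    samples = np.asarray(samples, dtype=float)
    lo = np.quantile(samples, alpha / 2, axis=0)
    hi = np.quantile(samples, 1 - alpha / 2, axis=0)
    return lo, hi

# e.g., 500 historical wind forecast errors (p.u.) at 3 buses
errs = np.random.default_rng(0).normal(0.0, 0.1, size=(500, 3))
lo, hi = sample_based_box_set(errs, alpha=0.1)
```

The robust dispatch problem is then solved against the worst case inside this box; the statistical feasibility analysis quantifies how many samples are needed for the resulting schedule to satisfy joint chance constraints out of sample.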
The Controversy
Prevailing wisdom assumes that handling joint chance constraints without distribution knowledge forces a compromise on feasibility or produces overly conservative schedules. Furthermore, there was skepticism regarding whether machine learning-based acceleration could maintain solution quality and guarantee feasibility in the highly complex, non-convex space of UC.
The Truth
Our experiments showed that robust UC can achieve strong performance guarantees using only real-world data samples. Our two-phase adaptive uncertainty set reduces conservatism while provably meeting joint chance constraints. Additionally, our L2O acceleration achieves >95% accuracy in mapping system conditions to unit status, cutting solution times by up to an order of magnitude without compromising reliability.
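The L2O pipeline rests on one idea: if a learned model predicts the binary unit-status decisions, the remaining dispatch problem is a linear program. The sketch below uses a nearest-neighbour lookup as a deliberately simple stand-in for the trained model (our paper uses a proper ML mapping); the data and function names are hypothetical.

```python
import numpy as np

def predict_commitment(load, hist_loads, hist_status):
    """Toy L2O mapping: reuse the commitment of the most similar past case.

    Fixing the predicted on/off binaries reduces the mixed-integer UC
    problem to a tractable LP over continuous dispatch variables.
    """
    hist_loads = np.asarray(hist_loads, dtype=float)
    dists = np.linalg.norm(hist_loads - np.asarray(load, dtype=float), axis=1)
    return np.asarray(hist_status)[np.argmin(dists)]

# Historical (load profile -> unit on/off vector) pairs, illustrative only.
hist_loads = [[100, 120], [300, 310], [500, 520]]
hist_status = [[1, 0, 0], [1, 1, 0], [1, 1, 1]]
status = predict_commitment([290, 305], hist_loads, hist_status)
```

In practice the predicted binaries are verified (and repaired if infeasible) before the LP is solved, which is how solution quality is preserved despite the learned shortcut.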
Broad Exploration
Building on statistical feasibility, our innovations span various operational scales: from robust scheduling of small-scale Thermostatically Controlled Loads (TCL) to large-scale Sample-Adaptive Robust Economic Dispatch (ED). Collectively, these represent a paradigm shift in practical, data-driven robust power operations.
[1] J. Liang, W. Jiang, C. Lu, C. Wu, "Joint Chance-Constrained Unit Commitment: Statistically Feasible Robust Optimization with Learning-to-Optimize Acceleration," IEEE Transactions on Power Systems, 2024.
[2] W. Jiang, C. Lu, C. Wu, "Robust Scheduling of TCLs with Statistically Feasible Guarantees," IEEE TSG, 2023.
[3] C. Lu, et al., "Sample-Adaptive Robust Economic Dispatch With Statistically Feasible Guarantees," IEEE TPWRS, 2024.
#3: Innovative Electricity Market Design
The Problem
Traditional centralized pricing schemes fail to incentivize local energy trading or reflect local power losses in modern distribution networks rich in distributed energy resources (DERs). How can we enable efficient, fair, and scalable local energy markets by fully leveraging advanced power electronic devices?
Our Core Idea
We developed a comprehensive transactive energy market (TEM) based on a cooperative Stackelberg game, transitioning the DSO into an active market facilitator. By exploiting the flexible control of power electronics, we simplified power flow models to compute distributed locational marginal pricing (LMP) efficiently, ensuring budget balance and individual rationality.
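The budget-balance property can be made concrete with a toy allocation rule. This is a hypothetical proportional scheme, not the mechanism in our paper: it charges each participant a share of total network losses in proportion to the magnitude of their injection, and by construction the charges sum exactly to the total loss, so the DSO neither profits nor loses.

```python
def allocate_losses(injections, total_loss):
    """Proportional loss allocation (illustrative, budget-balanced by design).

    Each participant pays a share of total_loss proportional to the
    magnitude of its net injection; shares always sum to total_loss.
    """
    weights = [abs(p) for p in injections]
    total = sum(weights) or 1.0  # guard against an all-zero market
    return [total_loss * w / total for w in weights]

# Three participants: a 5 MW seller, a 3 MW buyer, a 2 MW seller; 1 MW lost.
shares = allocate_losses([5.0, -3.0, 2.0], 1.0)
```

Our actual mechanism additionally accounts for network location, so participants who cause more loss pay more, while the same budget-balance identity holds.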
The Controversy
Conventional wisdom argues that distribution-level markets are computationally intractable due to non-convex power flows, and that local trading disrupts budget balance. Critics often claim that distributed LMPs are too complex in practice for radial networks with heterogeneous participants.
The Truth
We showed that power-electronics-enabled TEMs are highly effective and computationally tractable. Our loss allocation mechanism guarantees budget balance for the DSO while improving participants' payoffs. In our case studies, this structure reduces system-wide losses, incentivizes renewable integration, and alleviates local congestion.
Broad Exploration
Our group extensively explores market mechanisms to address uncertainties and information asymmetries. This includes electricity storage sharing, privacy-preserving procurement, EV virtual power plants, manipulation-proof virtual bidding, and carbon-efficient pricing, blending game theory, optimization, and statistical learning.
[1] N. Gu, J. Cui, C. Wu, "Power-Electronics-Enabled Transactive Energy Market Design for Distribution Networks," IEEE TSG, 2022.
[2] W. Jiang, et al., "Sample-Oriented Electricity Storage Sharing Mechanism Design," IEEE TSG, 2024.
[3] C. Lu, et al., "Manipulation-Proof Virtual Bidding Mechanism Design," IEEE TEMPR, 2024.
[4] C. Lu, et al., "Deadline Differentiated Dynamic EV Charging Price Menu Design," IEEE TSG, 2023.
(And various other works on EV integration, forecasting competition, and carbon efficiency.)
#4: Learning for Effective Power System Operation
The Problem
Accurate probabilistic forecasting is vital for grid operators, but load profiles on critical days (like weekends and holidays) diverge heavily from weekdays. Traditional deep learning models struggle to produce reliable uncertainty intervals for these irregular days, especially when historical data is extremely limited.
Our Core Idea
We designed a suite of customized deep learning frameworks capable of achieving high accuracy with minimal data. By combining signal decomposition techniques to extract trend/peak patterns with specialized loss functions (e.g., Quantile Regression Loss), we created models that prioritize critical operational days and directly align with grid management goals.
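The specialized loss function mentioned above is the standard quantile (pinball) loss, sketched here for reference. It penalizes under-prediction by q and over-prediction by (1 - q), so minimizing it drives the model output toward the q-th conditional quantile; training one head per quantile level yields calibrated prediction intervals. The example values are illustrative.

```python
import numpy as np

def pinball_loss(y_true, y_pred, q):
    """Quantile (pinball) loss for quantile level q in (0, 1).

    Asymmetric penalty: under-prediction costs q per unit, over-prediction
    costs (1 - q), so the minimizer is the q-th conditional quantile.
    """
    diff = np.asarray(y_true, dtype=float) - np.asarray(y_pred, dtype=float)
    return np.mean(np.maximum(q * diff, (q - 1) * diff))

# A 90% central interval comes from the 0.05 and 0.95 quantile models.
y = np.array([100.0, 110.0, 95.0])
lower = np.array([90.0, 100.0, 92.0])
loss_05 = pinball_loss(y, lower, q=0.05)
```

Interval quality is then judged by coverage (does the interval contain the realization often enough?) and sharpness (is it as narrow as possible?), the two metrics on which our frameworks outperform the baselines.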
The Controversy
It is widely believed that deep learning models cannot achieve reliable probabilistic predictions without massive historical datasets. Critics assumed that artificial data augmentation might cause overfitting, and doubted whether ML models could genuinely compensate for missing data diversity without sacrificing tractability.
The Truth
Our frameworks break the data-requirement barrier. Tested on real-world power grids, even with limited historical data, our methods produce high-quality probabilistic forecasts, significantly outperforming standard baselines (LSTM, CNN, Transformer) in both interval coverage and sharpness.
Broad Exploration
We strongly advocate shifting from traditional accuracy-centric ML to multi-task, end-to-end frameworks that directly optimize for operational and economic objectives. Our EDformer jointly minimizes dispatch cost and forecasting error, while MLP-Carbon provides probabilistic forecasts for carbon markets, yielding demonstrable reductions in real-world economic suboptimality.
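The end-to-end idea can be summarized as one training objective. The sketch below is a rough proxy, not EDformer's actual loss: it blends forecast error with an economic term that prices the forecast-actual gap at a constant marginal re-dispatch cost. The function name, the weighting, and the cost model are all illustrative assumptions.

```python
import numpy as np

def end_to_end_loss(forecast, actual, marginal_cost, lam=0.5):
    """Toy multi-task objective: forecast MSE plus an economic penalty.

    The second term prices the absolute forecast error at a constant
    marginal re-dispatch cost, standing in for the true dispatch cost
    that an end-to-end framework would differentiate through.
    """
    forecast = np.asarray(forecast, dtype=float)
    actual = np.asarray(actual, dtype=float)
    mse = np.mean((forecast - actual) ** 2)
    dispatch_regret = marginal_cost * np.mean(np.abs(forecast - actual))
    return lam * mse + (1 - lam) * dispatch_regret
```

Training on such an objective shifts the model's effort toward the errors that are economically expensive, rather than treating all MW of error equally.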
[1] Z. Tian, W. Liu, W. Jiang, C. Wu, "CNNs-Transformer based day-ahead probabilistic load forecasting for weekends with limited data availability," Energy, 2024.
[2] Z. Tian, et al., "EDformer family: End-to-end multi-task load forecasting frameworks for day-ahead economic dispatch," Applied Energy, 2025.
[3] Z. Tian, et al., "MLP-Carbon: A new paradigm... for accurate carbon price forecasting," Applied Energy, 2025.
[4] C. Lu, et al., "Effective end-to-end learning framework for economic dispatch," IEEE TNSE, 2022.
#5: Sample Efficient RL via Approximate Factorization
The Problem
Reinforcement learning (RL) suffers from the "curse of dimensionality" in large-scale sequential decision-making. While Factored MDPs attempt to solve this, they require strict, perfect factorization, which rarely exists in real-world energy systems. Can we break this curse when the system cannot be perfectly decomposed?
Our Core Idea
Inspired by the physical modularity of power systems, we proposed an "approximate factorization" scheme that relaxes strict decomposability. This allows general MDPs to be flexibly separated into low-dimensional components while tolerating small errors. Coupled with a novel graph-coloring-based synchronous sampling strategy, this dramatically reduces the sample complexity of model-free RL.
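The factorized update can be sketched in a few lines. This toy maintains one small Q-table per component and updates each from its own low-dimensional sub-transition, so storage scales with the largest component rather than the exponential joint space; the approximation error from imperfect decomposition, which our analysis bounds, is ignored here. Component sizes and rewards are illustrative.

```python
import numpy as np

def factored_q_update(q_components, sub_states, actions, rewards,
                      next_sub_states, alpha=0.1, gamma=0.95):
    """One synchronous Q-learning step per low-dimensional component.

    The global Q-function is approximated as a sum of per-component
    Q-tables, each over its own small (sub_state, action) space.
    """
    for Q, s, a, r, s2 in zip(q_components, sub_states, actions,
                              rewards, next_sub_states):
        target = r + gamma * Q[s2].max()
        Q[s, a] += alpha * (target - Q[s, a])
    return q_components

# Two components, e.g., a storage level (4 states) and a local grid
# condition (3 states), each with 2 actions; one joint transition.
Qs = [np.zeros((4, 2)), np.zeros((3, 2))]
Qs = factored_q_update(Qs, sub_states=[1, 2], actions=[0, 1],
                       rewards=[1.0, 0.5], next_sub_states=[2, 0])
```

The graph-coloring-based sampling strategy in the paper determines which components can be updated simultaneously from a single trajectory, which is where the sample-complexity savings come from.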
The Controversy
A long-held belief is that the curse of dimensionality is inescapable without strong assumptions like perfect factorization. Skeptics argue that allowing approximation errors in learning components would degrade overall policy performance, and question the utility of structure-exploiting algorithms in "messy" real-world domains.
The Truth
Our method achieves near-minimax sample complexity even under approximate factorization. The required samples scale only with the largest component, not the full system size. Real-world experiments on wind farm storage control showed that our factorization-based RL vastly outperforms traditional RL, demonstrating its robustness to model imperfections.
Future Directions
This work was heavily inspired by the actual physics of power systems (e.g., storage control states). Looking ahead, I am excited to extend the approximate factorization paradigm to broader grid operations, proving that real-world domain structures should inform AI algorithm design rather than imposing unrealistic mathematical assumptions on the physical world.
[1] C. Lu, L. Shi, Z. Chen, C. Wu, A. Wierman, "Overcoming the Curse of Dimensionality in Reinforcement Learning via Approximate Factorization," Proceedings of the 42nd International Conference on Machine Learning (ICML), 2025.