Building a better thermostat

Oak Ridge National Laboratory (ORNL) researchers designed and field-tested an algorithm that could help homeowners maintain comfortable temperatures year-round while minimising utility costs.

The algorithm learns over time to keep the home at residents’ desired temperature settings while minimising energy costs and adjusting to environmental conditions, all with no existing knowledge of the building. Results suggest the algorithm could save homeowners as much as 25% on annual utility bills.
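
In reinforcement-learning terms, a thermostat of this kind scores each control decision by how well it trades comfort against cost. The snippet below is a minimal, hypothetical sketch of such a reward signal, not ORNL's actual formulation; the weighting scheme and the flat tariff are illustrative assumptions.

```python
# Hypothetical reward sketch: penalise deviation from the resident's setpoint
# and the energy cost of the HVAC action taken over one control interval.
# The weights and tariff are illustrative assumptions, not ORNL's values.

def reward(indoor_temp_c: float, setpoint_c: float,
           energy_kwh: float, price_per_kwh: float,
           comfort_weight: float = 1.0, cost_weight: float = 1.0) -> float:
    """Higher is better: zero penalty only when the home sits exactly at
    the setpoint and the HVAC consumed no energy during the interval."""
    comfort_penalty = abs(indoor_temp_c - setpoint_c)
    energy_cost = energy_kwh * price_per_kwh
    return -(comfort_weight * comfort_penalty + cost_weight * energy_cost)
```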

“We found it’s not practical to try to create a different model for each individual building across a neighbourhood or city,” ORNL’s Helia Zandi said. “We wanted an algorithm we could apply to different buildings that would automatically learn the characteristics of the environment and how to minimise operating costs while maximising comfort.”

“The team’s goal is to make the model universal so it can adapt to any system with the least amount of data necessary,” said Matt Lakin.

Research abstract

Intelligent Heating, Ventilation, and Air Conditioning (HVAC) control using deep reinforcement learning (DRL) has recently gained significant attention due to its ability to optimally control the complex behaviour of the HVAC system. However, more work is needed to understand the adaptability challenges that a DRL agent can face during the deployment phase.
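
To make the control problem concrete, the sketch below defines a toy single-zone building environment with a first-order thermal model. It is purely illustrative and is not the paper's simulator: the class name, equipment power, tariff, and thermal parameters are all assumptions, and it reuses the hypothetical `reward()` sketch above.

```python
# Toy, fully hypothetical single-zone building environment (not the paper's
# simulator): the state is (indoor temp, outdoor temp, setpoint) and the
# action is an HVAC mode (0 = off, 1 = heat, 2 = cool).
import random

class ToyBuildingEnv:
    HEAT_KW, COOL_KW = 3.0, 2.5      # assumed equipment power draw
    PRICE_PER_KWH = 0.12             # assumed flat electricity tariff
    STEP_HOURS = 0.25                # 15-minute control interval
    EPISODE_STEPS = 96               # one simulated day

    def __init__(self, thermal_mass: float = 0.9, setpoint_c: float = 21.0):
        self.alpha = thermal_mass    # how slowly the zone drifts towards outdoors
        self.setpoint = setpoint_c

    def reset(self):
        self.t = 0
        self.indoor = self.setpoint + random.uniform(-2.0, 2.0)
        self.outdoor = random.uniform(-5.0, 35.0)
        return (self.indoor, self.outdoor, self.setpoint)

    def step(self, action: int):
        hvac_delta = {0: 0.0, 1: 0.5, 2: -0.5}[action]  # °C added per step
        energy_kwh = {0: 0.0, 1: self.HEAT_KW, 2: self.COOL_KW}[action] * self.STEP_HOURS
        # First-order drift towards the outdoor temperature plus the HVAC input.
        self.indoor = self.alpha * self.indoor + (1 - self.alpha) * self.outdoor + hvac_delta
        self.t += 1
        r = reward(self.indoor, self.setpoint, energy_kwh, self.PRICE_PER_KWH)
        return (self.indoor, self.outdoor, self.setpoint), r, self.t >= self.EPISODE_STEPS
```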

Online learning is not realistic for such applications because of the long training period required and the poor comfort control that is likely while the agent learns. Alternatively, a DRL agent can be pre-trained using a building model prior to deployment. However, developing an accurate building model for every house in order to deploy a pre-trained DRL model for HVAC control would not be cost-effective.
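
The pre-train-then-deploy workflow the abstract describes could look roughly like the sketch below. The agent's `act()`/`learn()` interface is a hypothetical placeholder for whatever DRL algorithm is used, and the environments are assumed to expose the `reset()`/`step()` interface of the toy model above; the paper's actual training setup may differ.

```python
# Hypothetical sketch of pre-training in simulation and then deploying the
# learned policy on a different house. The agent interface is assumed.

def pretrain(agent, sim_env, episodes: int) -> None:
    """Train entirely in simulation, where poor comfort control during
    exploration has no real-world consequence."""
    for _ in range(episodes):
        state = sim_env.reset()
        done = False
        while not done:
            action = agent.act(state, explore=True)
            next_state, r, done = sim_env.step(action)
            agent.learn(state, action, r, next_state, done)
            state = next_state


def deploy(agent, house_env, steps: int) -> None:
    """Run the pre-trained policy on a different house model (or a real
    house). Exploration is turned off here for illustration; whether the
    deployed agent keeps adapting online is a design choice."""
    state = house_env.reset()
    for _ in range(steps):
        action = agent.act(state, explore=False)
        state, _, done = house_env.step(action)
        if done:
            state = house_env.reset()
```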

In this study, we focus on evaluating the ability of DRL-based HVAC control to provide cost savings when pre-trained on one building model and deployed on different house models with varying user comfort preferences. We observed around a 30% cost reduction with the pre-trained model over the baseline when validated in a simulation environment, and achieved up to a 21% cost reduction when deployed in the real house.
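
The reported savings are, presumably, the percentage reduction in utility cost relative to a baseline controller over the same evaluation period. The helper below simply illustrates that arithmetic with made-up numbers.

```python
# Hypothetical illustration of the cost-reduction metric quoted in the
# abstract: percentage savings of the DRL controller relative to a baseline
# (e.g. a fixed-setpoint thermostat) over the same period.

def cost_reduction_pct(baseline_cost: float, drl_cost: float) -> float:
    """Percent reduction in utility cost relative to the baseline controller."""
    return 100.0 * (baseline_cost - drl_cost) / baseline_cost

# Made-up numbers: a $100 baseline bill versus $79 under DRL control
# corresponds to the roughly 21% reduction reported for the real house.
print(cost_reduction_pct(100.0, 79.0))  # -> 21.0
```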

This finding provides experimental evidence that a pre-trained DRL agent has the potential to adapt to different house environments and comfort settings.

The full research paper is available online.