Forecasting Energy Cost Savings in Weather-Guided Heating Control

Heating in buildings accounts for a major share of the world's energy consumption and CO2 emissions. Saving heating energy can help slow down global warming.

Heating energy consumption can be reduced by accounting for local weather conditions. However, it is often unclear to customers whether and when installing a weather-guided heating control system will pay off.

In collaboration with SEnerCon GmbH, and funded by the Berlin Program for Sustainable Development, we develop machine learning methods that help customers estimate the energy savings they could achieve with weather-guided heating control.
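A common starting point for such estimates is weather normalization: model heating consumption as a function of heating degree days (HDD), then compare the fitted baseline with measured consumption after the control system is installed. The sketch below illustrates this idea only; the function names and all numbers are invented for demonstration and do not represent the project's actual method.

```python
import numpy as np

def heating_degree_days(outdoor_temp_c, base_temp_c=15.0):
    """HDD: how far the outdoor temperature falls below a base temperature."""
    return np.maximum(base_temp_c - np.asarray(outdoor_temp_c), 0.0)

def fit_baseline(hdd, consumption_kwh):
    """Least-squares fit of consumption ~ intercept + slope * HDD."""
    X = np.column_stack([np.ones_like(hdd), hdd])
    coef, *_ = np.linalg.lstsq(X, consumption_kwh, rcond=None)
    return coef  # [intercept, slope]

def estimate_savings(coef, hdd_new, measured_kwh_new):
    """Savings = weather-normalized baseline prediction minus measured use."""
    predicted = coef[0] + coef[1] * np.asarray(hdd_new)
    return float(np.sum(predicted - measured_kwh_new))

# Synthetic example: daily data before and after installing the control.
temps_before = np.array([0.0, 5.0, 10.0, 2.0, -3.0])
kwh_before = np.array([48.0, 33.0, 18.0, 42.0, 57.0])
coef = fit_baseline(heating_degree_days(temps_before), kwh_before)

temps_after = np.array([1.0, 4.0, 8.0])
kwh_after = np.array([38.0, 30.0, 19.0])
savings = estimate_savings(coef, heating_degree_days(temps_after), kwh_after)
print(f"Estimated savings: {savings:.1f} kWh")  # → Estimated savings: 18.0 kWh
```

In practice the modeling is harder than this linear sketch suggests: occupancy, solar gains, and building physics all matter, which is one reason machine learning methods are useful here.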

Green Consumption Assistant

Consumer behaviour is a substantial driver of climate change. Studies indicate that even environmentally conscious consumers do not behave as sustainably as they could. A major reason is that relevant sustainability information is not available at the moment buying decisions are made.

In collaboration with the team of Prof. Tilman Santarius (TU Berlin) and Ecosia, and funded by the Federal Ministry for the Environment, we develop an Artificial Intelligence (AI) based assistant that supports consumers in making more sustainable shopping decisions.

The goal of the Green Consumption Assistant (GCA) is to surface sustainability information while customers are browsing the web. At Berlin University of Applied Sciences we will develop an open database with sustainability information. This database will not only be useful for the Green Consumption Assistant; it will also support other AI initiatives in building new data products.

Data Quality in Machine Learning Systems

Machine Learning (ML) methods are standard components in modern software systems and influence our decisions every day. Often, however, it takes years to translate research successes into useful ML innovations for end users. One of the reasons for this gap are challenges related to data quality.

In collaboration with Amazon Research and Prof. Sebastian Schelter at the University of Amsterdam, we develop methods for better automation of data quality monitoring (e.g. Schelter et al., VLDB, 2018), data quality improvement (e.g. Biessmann et al., CIKM, 2019), and prediction of data quality problems in ML production systems (e.g. Schelter et al., SIGMOD, 2020).
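One way to picture automated data quality monitoring is as declarative "unit tests for data", in the spirit of Schelter et al. (VLDB, 2018): constraints such as completeness or uniqueness are declared up front and evaluated against each new batch of data. The minimal sketch below is purely illustrative; the dataset, metric functions, and checks are invented and are not the API of any of the systems cited above.

```python
# Invented example data: one record has a missing email.
records = [
    {"id": 1, "email": "a@example.com", "age": 34},
    {"id": 2, "email": "b@example.com", "age": 28},
    {"id": 3, "email": None,            "age": 41},
]

def completeness(rows, column):
    """Fraction of rows where the column is not missing."""
    return sum(r[column] is not None for r in rows) / len(rows)

def uniqueness(rows, column):
    """Fraction of values in the column that occur exactly once."""
    values = [r[column] for r in rows]
    return sum(values.count(v) == 1 for v in values) / len(values)

# Declarative constraints: (description, predicate over the dataset).
checks = [
    ("id is unique",          lambda rows: uniqueness(rows, "id") == 1.0),
    ("email >= 90% complete", lambda rows: completeness(rows, "email") >= 0.9),
    ("age within [0, 120]",   lambda rows: all(0 <= r["age"] <= 120 for r in rows)),
]

for description, predicate in checks:
    status = "PASS" if predicate(records) else "FAIL"
    print(f"{status}: {description}")
```

Running such checks on every incoming batch, and alerting when a constraint fails, is the basic monitoring loop; predicting which constraints are likely to break next is the harder research problem.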

Transparency in Machine Learning

Many of us use Artificial Intelligence (AI) systems built with Machine Learning (ML) methods every day. Especially when judges or doctors rely on assistive ML, the right level of trust in an AI system is critical. Too much or even blind trust can lead to ill-considered decisions, while too little trust in assistive AI can mean ignoring valuable information.

In recent years, many methods have been proposed to render AI systems and their predictions more transparent, in order to foster trust in AI systems. To what extent transparency really increases trust in AI systems has remained largely unexplored.

In collaboration with Philipp Schmidt, Amazon Research, and Prof. Timm Teubner, TU Berlin, we are investigating whether and when transparency in AI actually increases trust in AI systems.

Preliminary results indicate that transparency can indeed often increase trust and can substantially improve human-AI collaboration (Schmidt and Biessmann, 2018).

However, we also find that transparency in AI systems can have the opposite effect: in some cases it leads to blind trust in, or ignorance of, an assistive AI's recommendations (Schmidt et al., 2020).

An important aspect of our results is that quality metrics for transparency should always take human cognition into account (Biessmann and Refiano, 2019).

Other results of our experiments suggest that a wide range of factors influence the effect of transparency on human-AI collaboration and trust. Our results indicate that task difficulty and personality traits such as risk aversion can alter the effect of transparency on trust in AI systems (Schmidt and Biessmann, 2020).

Citizen-based Monitoring for Peace & Security in the Era of Synthetic Media and Deepfakes

In a future where digital data from a variety of sources are abundant and widely available to non-governmental experts and independent analysts, and where virtually any type of digital media can be generated in ways that make it effectively indistinguishable from real data, issues of data authentication in monitoring and verification deserve a careful and systematic analysis.

In this project, funded by the Deutsche Stiftung Friedensforschung (German Foundation for Peace Research), Prof. Dr. Felix Bießmann (Berliner Hochschule für Technik), Prof. Dr. Rebecca D. Frank (University of Tennessee, Knoxville), and Prof. Dr. Alexander Glaser (Princeton University) examine the potential role of citizen-based monitoring and verification for peace and security. This two-phase research project seeks to systematically assess the long-term opportunities for citizen-based monitoring, using the important example of satellite imagery in the context of nuclear monitoring and verification, and to understand the risks and challenges, which are often enabled by these very same techniques and tools.

Leveraging advanced machine-learning techniques to generate synthetic imagery of relevant sites, we can produce dedicated datasets under carefully controlled conditions. This imagery will then be used to develop and examine concrete monitoring scenarios, followed by qualitative interviews and hands-on exercises with focus groups and data users to examine future challenges for citizen-based monitoring. We place particular emphasis on the possibility of image spoofing and fabricated data, examine broader ethical issues related to persistent earth observation, and also consider safeguards that could make citizen-based monitoring a viable and robust tool in support of peace and security.