There are almost 5 million registered fall incidents per year in Germany, and the annual costs of treating falls exceed 500 million euros. Yet up to 30% of all falls could be prevented. Systems based on artificial intelligence (AI) can analyze risk factors, predict individual fall risks, and thus digitally support caregivers in fall prevention. However, existing systems often do not consider all relevant risk factors (Seibert et al. 2021). For example, AI-based fall prevention often relies on gait analysis, even though the risk of falling increases by 56% for many patients taking as little as half their daily dose of hypnotics and sedatives. Medication data, however, are often unavailable to AI systems because access to them is technically and legally complex. The potential of AI-based fall prevention has therefore not yet been fully realized. The aim of this project is to make the data relevant for risk assessment according to established care standards conveniently available in digital form to the staff in care facilities. Fall prevention is only one example from a field of similar nursing problems such as pressure sores, urinary incontinence, and delirium, to which the data integration developed in the project can be applied as well. With the help of the data integration and data analysis methods developed in the project, nursing care staff will be able to leverage AI assistance more efficiently.

Artificial intelligence (AI) methods can be used to support nursing care. Research projects in the field of nursing and AI are confronted not only with difficulties in accessing representative, high-quality data, but also with the challenge of involving nursing facilities and practitioners in research and development and of collaborating with them successfully over the course of the project. The ProKIP accompanying research project investigates and promotes the integration of AI solutions into nursing practice. ProKIP accompanies, advises, networks, and evaluates research projects in the BMBF funding program "Making repositories and AI systems usable in everyday nursing care".

The consequences of climate change have become visible in heavy rainfall events. During heavy rainfall, polluted wastewater enters the natural environment, increasing pollutant concentrations in groundwater and rivers. In the RIWWER project, funded by the German Federal Ministry for Economic Affairs and Climate Action, we are developing machine learning methods that use weather data and sensor data from the wastewater system to improve control of the wastewater system. The aim is to minimize the amount of pollutants that run off into groundwater and surface water.
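A common building block in such predictive control is a model that forecasts inflow to the sewer network from recent rainfall, so that retention basins can be emptied pre-emptively before a storm. The following is a minimal, self-contained sketch of this idea; the linear rainfall-runoff model, the threshold rule, and all numbers are illustrative assumptions, not the methods developed in RIWWER.

```python
# Minimal sketch: forecast sewer inflow from rainfall and decide whether
# to pre-emptively release water from a retention basin. The linear
# rainfall-runoff model and all numbers are illustrative.

def fit_runoff_model(rain_mm, inflow_m3):
    """Least-squares fit of inflow ~ a + b * rainfall (1-D linear regression)."""
    n = len(rain_mm)
    mean_x = sum(rain_mm) / n
    mean_y = sum(inflow_m3) / n
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(rain_mm, inflow_m3))
    var = sum((x - mean_x) ** 2 for x in rain_mm)
    b = cov / var
    a = mean_y - b * mean_x
    return a, b

def control_action(forecast_rain_mm, basin_level_m3, capacity_m3, model):
    """Release water early if the forecast inflow would overflow the basin."""
    a, b = model
    predicted_inflow = a + b * forecast_rain_mm
    if basin_level_m3 + predicted_inflow > capacity_m3:
        return "release"   # empty the basin before the storm hits
    return "hold"          # keep polluted water back for treatment

# Historical observations (rainfall in mm, measured inflow in m^3)
rain = [0.0, 2.0, 5.0, 10.0, 20.0]
inflow = [50.0, 150.0, 300.0, 550.0, 1050.0]
model = fit_runoff_model(rain, inflow)

print(control_action(forecast_rain_mm=15.0, basin_level_m3=400.0,
                     capacity_m3=1000.0, model=model))  # → release
```

In practice such decisions involve far richer models and multiple coupled basins; the sketch only shows why combining a weather forecast with current sensor readings allows acting before, rather than after, an overflow occurs.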

Heating in buildings accounts for a major part of the world’s energy consumption and CO2 emissions. Saving heating energy can help to slow down global warming.

Heating energy consumption can be reduced by accounting for local weather conditions. However, it is often unclear to customers whether and when installing a weather-guided heating control system will pay off.

In collaboration with SEnerCon GmbH, and funded by the Berlin Program for Sustainable Development, we develop machine learning methods that help customers estimate the energy savings they could achieve with weather-guided heating control.
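One simple way to make such an estimate is to regress past consumption on heating degree days and apply an assumed efficiency gain to the weather-dependent share of consumption. The sketch below illustrates this under strongly simplifying assumptions; the base temperature, the 12% efficiency gain, and the synthetic data are our own illustrative choices, not SEnerCon's actual model.

```python
# Illustrative sketch: estimate potential heating-energy savings of a
# weather-guided controller via a heating-degree-day (HDD) regression.
# The 15 °C base temperature and the 12 % efficiency gain are assumptions.

BASE_TEMP_C = 15.0

def heating_degree_days(daily_mean_temps):
    """Sum of (base - T) over days colder than the base temperature."""
    return sum(max(BASE_TEMP_C - t, 0.0) for t in daily_mean_temps)

def fit_consumption_model(hdd_per_month, kwh_per_month):
    """Least-squares fit of consumption ~ base_load + slope * HDD."""
    n = len(hdd_per_month)
    mx = sum(hdd_per_month) / n
    my = sum(kwh_per_month) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(hdd_per_month, kwh_per_month))
             / sum((x - mx) ** 2 for x in hdd_per_month))
    base_load = my - slope * mx
    return base_load, slope

def estimated_savings_kwh(hdd, model, efficiency_gain=0.12):
    """Weather-dependent consumption times the assumed efficiency gain."""
    base_load, slope = model
    return slope * hdd * efficiency_gain

# Monthly heating degree days and metered consumption (synthetic data)
hdd = [50.0, 120.0, 300.0, 420.0]
kwh = [300.0, 650.0, 1550.0, 2150.0]
model = fit_consumption_model(hdd, kwh)
print(round(estimated_savings_kwh(300.0, model), 1))  # → 180.0
```

Separating the weather-independent base load (hot water, standby losses) from the weather-dependent slope is what makes the savings estimate specific to a building and its local climate.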

Consumer behaviour is a substantial driver of climate change. Studies indicate that even environmentally conscious consumers do not behave as sustainably as they could. A major reason is that relevant sustainability information is not available at the moment a buying decision is made.

In collaboration with the team of Prof. Tilman Santarius (TU Berlin) and Ecosia, and funded by the Federal Ministry for the Environment, we develop an artificial intelligence (AI) based assistant that supports consumers in making more sustainable shopping decisions.

The goal of this Green Consumption Assistant (GCA) is to surface sustainability information while customers are browsing the web. At Berlin University of Applied Sciences we will develop an open database with sustainability information. This database will not only be useful for the Green Consumption Assistant; it will also support other AI initiatives in building new data products.

Machine learning (ML) methods are standard components of modern software systems and influence our decisions every day. Often, however, it takes years to translate research successes into useful ML innovations for end users. One reason for this gap is challenges related to data quality.

In collaboration with Amazon Research and Prof. Sebastian Schelter at the University of Amsterdam, we develop methods for better automation of data quality monitoring (e.g. Schelter et al., VLDB 2018), data quality improvement (e.g. Biessmann et al., CIKM 2019), and prediction of data quality problems in ML production systems (e.g. Schelter et al., SIGMOD 2020).
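The core idea behind automated data quality monitoring can be illustrated with a few declarative, unit-test-style checks that compute metrics over a dataset and compare them against thresholds. The sketch below is much simpler than the methods in the cited papers; the check names, metrics, and toy dataset are our own illustrative choices.

```python
# Minimal sketch of declarative data quality checks on tabular data,
# in the spirit of unit tests for data. Metric and check names are
# illustrative, not an existing library's API.

def completeness(rows, column):
    """Fraction of rows where the column is not None."""
    return sum(r.get(column) is not None for r in rows) / len(rows)

def uniqueness(rows, column):
    """Fraction of non-null values that occur exactly once."""
    values = [r[column] for r in rows if r.get(column) is not None]
    return sum(values.count(v) == 1 for v in values) / len(values)

def run_checks(rows, checks):
    """Evaluate (name, metric_fn, threshold) triples; return failed checks."""
    return [name for name, metric, threshold in checks
            if metric(rows) < threshold]

orders = [
    {"order_id": 1, "customer": "a"},
    {"order_id": 2, "customer": None},   # missing customer
    {"order_id": 2, "customer": "b"},    # duplicate order id
]

failed = run_checks(orders, [
    ("customer_complete", lambda r: completeness(r, "customer"), 1.0),
    ("order_id_unique",   lambda r: uniqueness(r, "order_id"), 1.0),
])
print(failed)  # → ['customer_complete', 'order_id_unique']
```

Running such checks on every new batch of data, and learning which metric values are normal for a given pipeline, is what turns one-off validation into continuous monitoring.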

Many of us use artificial intelligence (AI) systems built with machine learning (ML) methods every day. Especially when judges or doctors use assistive ML, the right level of trust in an AI system is critical. Too much or even blind trust can lead to ill-considered decisions, while too little trust in assistive AI means ignoring valuable information.

In recent years, many methods have been proposed to render AI systems and their predictions more transparent, in order to foster trust in them. To what extent transparency actually increases trust in AI systems has remained largely unexplored.

In collaboration with Philipp Schmidt, Amazon Research, and Prof. Timm Teubner, TU Berlin, we are investigating whether and when transparency in AI actually increases trust in AI systems.

Preliminary results indicate that transparency can indeed often increase trust and can substantially improve human-AI collaboration (Schmidt and Biessmann, 2018).

However, we also find that transparency in AI systems can have the opposite effect: in some cases, transparency leads to blind trust or to ignoring an assistive AI's recommendations (Schmidt et al., 2020).

An important aspect of our results is that quality metrics of transparency should always take into account human cognition (Biessmann and Refiano, 2019).

Other results of our experiments suggest that a wide range of factors impact the effect of transparency on human-AI collaboration and trust. Our results indicate that task difficulty and personality traits such as risk aversion can alter the effect of transparency on trust in AI systems (Schmidt and Biessmann, 2020).

In a future where digital data from a variety of sources are abundant and widely available to non-governmental experts and independent analysts, and where virtually any type of digital media can be generated in ways that make it effectively indistinguishable from real data, issues of data authentication in monitoring and verification deserve careful and systematic analysis. In this project, funded by the Deutsche Stiftung Friedensforschung (German Foundation for Peace Research), Prof. Dr. Felix Bießmann (Berliner Hochschule für Technik), Prof. Dr. Rebecca D. Frank (University of Tennessee, Knoxville), and Prof. Dr. Alexander Glaser (Princeton University) examine the potential role of citizen-based monitoring and verification for peace and security. This two-phase research project seeks to systematically assess the long-term opportunities for citizen-based monitoring using the important example of satellite imagery in the context of nuclear monitoring and verification, and to understand the risks and challenges that are often enabled by these very same techniques and tools. Leveraging advanced machine-learning techniques to generate synthetic imagery of relevant sites, we can produce dedicated datasets under carefully controlled conditions. This imagery will then be used to develop and examine concrete monitoring scenarios. This will be followed by qualitative interviews and hands-on exercises with focus groups and data users in order to examine future challenges for citizen-based monitoring. We will place particular emphasis on the possibility of image spoofing and fabricated data, examine broader ethical issues related to persistent earth observation, and also consider safeguards that could make citizen-based monitoring a viable and robust tool in support of peace and security.