Imagine an Artificial Intelligence (AI) company that wants to develop an AI-driven clinical decision support system (CDSS). Developing and deploying such a system can easily have unforeseen consequences; it is, for example, difficult to ensure that the AI-driven CDSS is not biased, as the person collecting the data used to train the model may inadvertently transfer their own bias to the dataset. The AI company is aware of this danger, but foreseeing every impact a new technology project will have on society at large is a specialisation in itself. This is what YAGHMA helps with.
They develop, customise and refine impact monitoring systems that foresee the intended and unintended ethical, societal, legal, environmental and governance impacts of projects involving new technologies, including in Digital Health.
In the LUCIA Project, the YAGHMA team focuses on AI Impact Assessment and on the Analysis of Ethical and Societal Challenges in the design, development, and implementation of AI in LUCIA.