With colleagues from the MeVer team at CERTH-ITI, I am currently (February 2023) conducting a study on detecting gruesome imagery with the help of Artificial Intelligence. One aim: to reduce the negative impact on those who have to view such digital content (e.g. journalists, human rights investigators) and spare them harmful psychological consequences.
In our study, and with the help of experts who regularly have to view potentially traumatizing imagery, we aim to research to what extent Artificial Intelligence (AI) can support the detection of gruesome and potentially disturbing imagery. Building on this, we want to apply a variety of filters intended to reduce the impact of graphic imagery on viewers.
This is meant to serve as a kind of ‘early warning system’, so that investigators do not come across gruesome imagery unprepared. If successfully applied, such technology could become one component in supporting trauma prevention and safeguarding mental wellbeing.
If you want to participate in the study (open until 24 February 2023), you are invited to fill out a questionnaire. However, please only do so if you really feel up to it. It cannot be stressed often enough how harmful viewing graphic imagery can be. That is why you should carefully read all the preparatory notices before continuing with the survey. If you do feel up to it, we warmly and gratefully welcome your participation.
More details can be found here; the article also contains the link to the survey.
PS: Regarding the mental wellbeing of journalists and investigators who regularly have to view digital content of all kinds, including the nasty and unpleasant stuff, I recommend the article Mental well-being of investigators on the digital frontline elsewhere on this blog.