How does artificial intelligence change our interactions and where does it become a danger?

Our project staff Daniela Kreklow (TBS NRW) and Sarah Alex (agentur mark GmbH) addressed these questions in their presentation "AI & Digital Violence", which they held in September 2025 at the Cologne Student Union in two events organized jointly with the equal opportunities officer Stefanie Nicolini.

The series started with the women's meeting; the men's meeting followed a week later.


The aim of the events was to raise awareness of digital violence and its connection to artificial intelligence. Digital violence is no longer a fringe phenomenon: whether via social networks, messenger services or email, insults, identity theft, deepfakes and discriminatory AI algorithms are part of a new digital reality.

In an impulse lecture, Daniela Kreklow gave first insights into the different manifestations of digital violence, from cyberbullying and deepfakes to doxxing and voice cloning.

Photos: Heike Fischer

One focus of the lecture was deepfakes, i.e. AI-generated images, voices or videos that now appear deceptively real. What is alarming is how easy it has become to create such material: with freely available tools and just a few clicks, convincing manipulations can be produced. This significantly lowers the inhibition threshold for misuse and illustrates how urgent education and digital competence have become.

Using examples, the participants discussed how difficult it is to recognize digital forgeries today, what risks arise from this for democracy, security and private life, and what new strategies for dealing with them will be necessary.

Also in focus: discrimination by AI. When algorithms are trained on unbalanced data, they reflect social prejudices, for example when hiring AIs disadvantage women or people with a migration background. It became clear that AI can make many processes faster and more objective, but this requires good framework conditions: high data quality, comprehensible algorithms and ethical guidelines.
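How biased training data turns into biased decisions can be illustrated with a deliberately simplified sketch (the data and the "model" below are hypothetical, invented for this example): a naive scoring model that only learns historical acceptance rates per group faithfully reproduces past discrimination.

```python
# Toy illustration with invented data: group "A" was favored in past
# hiring decisions. A naive model trained on that history inherits the bias.

# Historical decisions as (group, hired) pairs.
history = [("A", 1), ("A", 1), ("A", 1), ("A", 0),
           ("B", 1), ("B", 0), ("B", 0), ("B", 0)]

def train(records):
    """'Train' by learning the per-group hire rate from historical data."""
    stats = {}
    for group, hired in records:
        hits, total = stats.get(group, (0, 0))
        stats[group] = (hits + hired, total + 1)
    return {g: hits / total for g, (hits, total) in stats.items()}

model = train(history)

# Two otherwise identical candidates, differing only in group membership:
print(model["A"])  # 0.75 -> favored
print(model["B"])  # 0.25 -> disadvantaged, purely because of skewed data
```

Real application-screening systems are far more complex, but the mechanism is the same: without balanced data and auditable models, the system encodes yesterday's prejudices as tomorrow's decisions.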


The keynote speech was followed by a group work phase in which real case studies were discussed:

  • Double account on social networks
  • Deepfake photos of students
  • AI-generated fake voice messages
  • Manipulated customer reviews
  • Inequality through AI algorithms
  • Loss of trust in media

The participants analyzed where digital violence begins, what consequences it has and how we can counter it at the individual, organizational and societal level. This led to a lively exchange about protective mechanisms, media literacy and responsibility in the digital space.


Another topic was the EU AI Act, which has been in force since August 2024 and will be implemented step by step until 2030. The regulation not only brings transparency obligations for AI systems, but also obliges companies to ensure AI competence among employees who work with AI.

In connection with deepfakes, the mandatory labeling of AI-generated content was also discussed. Such labeling is to become obligatory and thus make an important contribution to curbing forgeries in the digital space and strengthening trust in digital media.
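What a machine-readable disclosure label could look like can be sketched as follows. This is not the AI Act's prescribed mechanism; it is a minimal, hypothetical example loosely inspired by content-provenance approaches, where a label record is cryptographically bound to the content it describes:

```python
import hashlib
import json

def label_ai_content(payload: bytes, generator: str) -> dict:
    """Build a hypothetical AI-disclosure record for a piece of content.

    The SHA-256 digest binds the label to the exact bytes of the content,
    so the label cannot simply be copied onto different material.
    """
    return {
        "ai_generated": True,
        "generator": generator,            # e.g. name/version of the model
        "sha256": hashlib.sha256(payload).hexdigest(),
    }

# Example: label some generated content (placeholder bytes and model name).
record = label_ai_content(b"example image bytes", "demo-model-v1")
print(json.dumps(record, indent=2))
```

A verifier would recompute the hash of the content and compare it with the record; a mismatch means the content was altered after labeling or the label belongs to something else.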


“We are all challenged – if you know the weak points, you can shape them. Awareness, diversity and transparency are the best protection mechanisms against digital discrimination.”

The central message: Digital violence affects us all. No one is completely protected, but everyone can contribute to making digital spaces safer – through education, solidarity and a critical approach to AI.

What became particularly clear: AI is both a valuable tool and a risk. The same technology that enables progress in medicine, education or administration can also be used for targeted manipulation, discrimination or deception.

That’s why, in addition to regulation, one thing is needed above all: a keen awareness of responsibility, fairness and humanity in digital change.


Would you like to explore the topic further in your organization? Whether workshop, keynote or dialogue format, we are happy to support you in shaping AI and digital responsibility together.

Daniela Kreklow
Future Center KI NRW | TBS NRW eV
daniela.kreklow@tbs-nrw.de

Sarah Alex
Future Center KI NRW | agentur mark GmbH
alex@agenturmark.de

Further offers: Diversity management, discrimination & AI « Future Center KI NRW

