This morning at 10:00 a.m., the Ethics Council issued a statement on the topic of "Human and Machine - Challenges posed by Artificial Intelligence" at the Federal Press Conference.
I found it so exciting that I watched the press conference. Here is my summary and my own assessment.
The key message was: "Delegating tasks performed by humans to machines must not restrict people's options for action. Expanding the options of certain groups must not lead to restrictions for other groups of people."
Four segments were explicitly considered in the statement:
Medicine:
AI has arrived in individual areas, above all in medical research, but not yet in broad clinical practice.
It is important to keep "deskilling," the loss of human competencies, in mind and to avoid it. Completely replacing doctors would jeopardize patient safety.
Education:
The use of AI is seen as ambivalent: the relationship between teachers and learners could become imbalanced, to the point where educational goals can no longer be achieved.
Communication:
The question arises whether there should be a public communication platform, in public administration or perhaps as a foundation model. It would have to be established at the European level, ultimately in order to stabilize democracy. This would not simply mean expanding public broadcasting; rather, the filter bubbles of existing commercial platforms should not arise, and users should be able to exert influence.
In my view, this deserves closer attention, since it could also be debated whether private-sector platforms should be restricted in order to give public models a chance.
Public administration:
Systems intended to improve decision-making are already in use today. However, it has not been proven that these systems necessarily lead to better decisions. The decision-making processes of such systems must be made transparent in order to rule out discrimination resulting from their decisions.
The overall question for me is how AI can help prevent or reduce injustices. The Ethics Council pointed out that AI reproduces or even intensifies certain existing injustices, presumably because of "unfair" training data. How can this be prevented? At present, hardly at all, since training data can always carry a certain bias, depending on its origin and quality.
The Ethics Council suggests counteracting this by making the training data more balanced. Of course, this raises the question of who decides whether data is balanced. Here, too, there would be a certain bias, if not outright manipulation of the data in the interest of the respective model's creator. "Explainable AI" is certainly essential here: it reduces the black-box character of decision-making and makes decisions more understandable and comprehensible.
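The statement stays at the level of principles, but the idea of "more balanced training data" can be made concrete. One common, simple technique is inverse-frequency reweighting, in which samples from under-represented groups receive proportionally larger weights so that each group contributes equally to training. A minimal sketch in Python (the function name and the toy data are my own illustration, not part of the statement):

```python
from collections import Counter

def group_weights(groups):
    """Inverse-frequency sample weights: each group ends up
    contributing the same total weight to training."""
    counts = Counter(groups)
    n, k = len(groups), len(counts)
    # weight = n / (k * group_count): over-represented groups get weights < 1
    return [n / (k * counts[g]) for g in groups]

# Toy example: group "A" is over-represented 3:1
groups = ["A", "A", "A", "B"]
weights = group_weights(groups)
```

Of course, this only rebalances along a dimension someone has chosen to measure, which is exactly the Council's point: the decision about what counts as "balanced" itself introduces a bias.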
The question also arose whether Germany and Europe are now at a disadvantage because most relevant AI systems are developed in the USA or China. The Ethics Council suggests that this may be the case, also because of socio-cultural differences from other regions of the world where AI systems are emerging. From the Ethics Council's point of view, however, this must not lead to a lowering of standards in order to catch up. That is certainly to be welcomed. Europeans should develop something like "AI Ethics made in Europe," incorporating values that do not necessarily apply worldwide.
I found Professor Nida-Rümelin's remarks on where AI innovations come from particularly exciting. Disruptive business models often originate in Silicon Valley, but the underlying innovations come mainly from the US military, the Israeli military, and academia.
For example, the MP3 format (which has nothing to do with AI) was developed at the Fraunhofer Institute, but it was in the USA that MP3 was turned into a commercial success, certainly helped, in my view, by the introduction of Apple's iPod. In a debate a few years ago, I heard that this was precisely why the “Agentur für Sprunginnovationen” was founded in Germany. We should take a closer look at who has actually benefited from the existence and funding of this agency; hopefully not just corporations. I currently have no information on how to work with this agency. Can funding applications be submitted? If so, how?
This brings me to a thought I would like to raise briefly without going into detail: could it make sense for AI to play a significant role in preparing funding applications? That would certainly improve equal opportunities in this area. Why? Today, submitting certain applications is often feasible only for large organizations, or for those that can afford it; the effort involved in applying for funding can be so immense that smaller players do not apply at all. Here AI could certainly help to increase equal opportunities. Do such approaches already exist?
Now to the end of the press conference.
Prof. Dr. Alena Buyx noted in response that she hopes the public debate will neither demonize AI nor unconditionally follow the hype; rather, there must be participatory debates. I agree. We are only at the beginning of the AI-accompanied journey and should neither put on blinkers nor uncritically chase the current hype, especially since most of us, at least in the medium and long term, will be users of AI rather than creators of our own AI innovations.
Here you can download the statement of over 280 pages:
This was only the prelude to the complex topics of ethics, equality, equal opportunities, and much more. We want to address these topics again and again here at PANTA RHAI in the future.