When it comes to ethics, Artificial Intelligence (AI) faces a chicken-and-egg problem: does the technology simply follow its human training, or is it already developing its own understanding of ethics? Matthias Schulz, Manager Collaboration and Talent Solutions at IBM, addresses common misconceptions in an interview with WYZE Project.
WYZE: Mr. Schulz, CIMON, the first autonomous astronaut assistant with Artificial Intelligence (AI) in space, recently made its debut on the ISS. On Earth, by contrast, AI seems to be having teething problems: depending on the reading, the technology is the “greatest threat of our time”, an “opportunity the German economy risks missing” or “hardly of interest” to digital executives. What’s wrong with our relationship with AI?
Matthias Schulz: Nothing is wrong. This orientation phase has occurred, and still occurs, with every technological innovation that becomes a new ‘gravitational centre’ for social change. Unlike the steam engine, the railway or television, AI is today developed and discussed globally by many actors. That, in turn, reinforces the desire for a single “correct” assessment or classification. But that chase is futile.
WYZE: Which approach do you recommend?
Matthias Schulz: A good starting point is understanding the roles we assign to AI. For me, the ‘A’ in ‘AI’ currently stands less for ‘Artificial’ than for ‘Augmented’. In other words, artificial intelligence takes on the role of an assistant, a technological helper in the background. CIMON is a good example, albeit in a special context: here, the AI is meant to be a conversation partner, play music and document experiments. That sounds trivial compared to AI applications on Earth, but it is an important factor in the finely balanced systems of the ISS and future space missions.
WYZE: So whether and how AI works always depends on its field of application?
Matthias Schulz: Right. AI tools in human resources or financial accounting have standards and goals that are of little help with AI-supported networking in the ‘Smart Factory’. This point is also quickly overlooked in the discussion about ‘the’ AI: we are not talking about the linear development of one AI, but of many different AI variants.
AI is thus gradually growing into society in many places. So there is also enough time to decide which AI applications remain assistants and which are allowed to take on more authority and responsibility. We are experiencing this very concretely with autonomous driving: nobody would give an AI full control of a vehicle today. But semi-autonomous systems already provide greater safety and save lives. So why shouldn’t we, in the foreseeable future, come to feel that the machine drives better than we do?
WYZE: Such confidence-building certainly works better if you can converse with AI fluently. In this respect, applications such as Alexa, Siri or voice navigation in the car are more ‘artificial’ than ‘intelligent’…
Matthias Schulz: Yes, but we are still at the beginning of the journey. The mastery of language is without question one of AI’s biggest limitations. Google Duplex has shown very impressively that AI software can pass for a human on the phone. However, booking a hair appointment is not a Turing test. As soon as you abruptly change the subject and break the pattern within which the AI moves safely, stock phrases or repetitions inevitably occur at some point. And then the illusion is broken.
WYZE: Maybe a human–AI conversation culture that is neither purely technical nor purely human, but something completely new, needs to be developed. IBM seems to be heading in this direction with the AI application Watson Debate…
Matthias Schulz: Watson Debate discusses ethical and political issues with us on the basis of different pro/con positions that it finds on the Internet; for example, on the question “should violent computer games be banned?” The AI does not make a decision, however; it debates the advantages and disadvantages drawn from all available online sources, studies, professional articles, opinions and so on, researched within fractions of a second.
It is always a matter of objectively discussing at least two positions on a subject, on which Watson Debate, mind you, was not trained in advance. This is a very interesting ‘intuitive’ element. Not intuition in the human sense, but its selection and style of argumentation can make it appear that way to us as participants in the discussion.
WYZE: Which, in turn, is a big step beyond the aforementioned limits of language mastery…
Matthias Schulz: Yes, especially since Watson Debate can now also defend a position in a spoken discussion. That is a completely different experience from asking Alexa about the weather. After a few arguments, you are so caught up in the flow of the discussion that you increasingly forget who you are actually talking to.
WYZE: Two years ago, Microsoft’s bot “Tay” was also supposed to learn from the ‘swarm intelligence’ of the web, but it turned into a willing Twitter helper for spreading racist slogans. Does Watson Debate have an ethics algorithm that prevents this?
Matthias Schulz: As I said, the core idea here is a completely different one: to present a spectrum of arguments objectively, precisely and comprehensibly, not to reproduce the most commonly shared opinions or keywords. “Tay” was more concerned with deriving a majority opinion from the sheer quantity of available information. Incidentally, this was not an AI error specific to “Tay”; it happens to every search engine that weights results by frequency.
Watson Debate, however, has an intelligence that understands content and can weigh which positions are stronger or weaker. We accompany this work, by the way, with the consortium “Ethics in AI”, because of course we pass our values on to Watson Debate; that cannot be avoided at all. And precisely because many military budgets are currently flowing into AI development, now is the right time to start this discussion on AI ethics. One can, and certainly should, participate in it.
WYZE: Thank you very much for the interview!
Preview: Tractors that roll off the assembly line with AI support? Photo chats with Watson to get idle machines up and running again? This and more about IBM Watson in the “Factory of the Future” can be found in the second part of the interview with Matthias Schulz.