Do you remember when chess tournaments were broadcast on television?
People who are good at chess are considered intelligent. In the 1990s, IBM wanted to show that its computers, too, had something on the ball.
In 1997, "Deep Blue" defeated the world chess champion Garry Kasparov. However, IBM's supercomputer with its 216 special chess processors wasn't intelligent; it "only" had very clever programming (written in the C programming language).
What we now call "generative AI" (GenAI) works on a different principle and can generate content, e.g., texts, images, or music. The results can be impressive, but the term "intelligence" still doesn't fit. Since 2023 at the latest, language models that can understand natural (written) language and respond in kind have been making headlines: ↗ChatGPT by OpenAI, ↗Claude.ai by Anthropic, and Google ↗Gemini, as well as Microsoft/Bing ↗Copilot, which uses ChatGPT.
A language model is based on a neural network that has been trained with huge amounts of text data
to recognize patterns in language.
To generate text, language models use statistical relationships to predict which words or sentences are most likely to follow next.
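The principle can be illustrated with a deliberately simplified sketch in Python. This is not how a real language model works internally (those use neural networks over very long contexts), but it shows the basic idea of predicting the statistically most likely next word; the sample sentences are invented for illustration.

```python
# A deliberately tiny "language model": a bigram table that counts which
# word follows which in a sample text, then predicts the most likely
# continuation. Real language models use neural networks instead of raw
# counts, but the prediction principle is the same.
from collections import Counter, defaultdict

sample_text = (
    "the captain steers the steamship on the river "
    "the steamship travels on the danube "
    "the captain of the steamship greets the passengers"
)

# Count, for every word, how often each possible successor follows it.
follower_counts = defaultdict(Counter)
words = sample_text.split()
for current_word, next_word in zip(words, words[1:]):
    follower_counts[current_word][next_word] += 1

def predict_next(word: str) -> str:
    """Return the statistically most frequent successor of `word`."""
    followers = follower_counts[word]
    if not followers:
        return "<unknown>"  # the model has never seen this word
    return followers.most_common(1)[0][0]

print(predict_next("the"))      # "steamship" - it followed "the" most often
print(predict_next("captain"))  # "steers" or "of" - both occurred once
```

A real language model does essentially the same thing on a vastly larger scale: instead of simple word counts, a neural network assigns probabilities to possible continuations based on the entire preceding text.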
But this type of AI has no awareness of the content it produces. GenAI generates responses based only on probabilities, not on a real understanding of their meaning. Language models cannot recognize physical and logical principles or contradictions by themselves; they only "know" about them through the language patterns present in the training data. For this reason, in 2021 the linguist Emily Bender described such language models as "stochastic parrots," because they just chatter away without really understanding the content of their chatter. And indeed, the many new users in 2023 initially encountered numerous illogical answers and fabricated statements that were obviously false. |
AI image from an AI image generator: In German, there is a particularly long word, "Donaudampfschifffahrtsgesellschaftskapitän," from an old song. The word means Danube-Steamship-Travel-Company-Captain. The image is an AI-generated visualization of this word. It looks quite nice at first glance; only on closer inspection do some oddities stand out.
An explanation for these abnormalities lies in the task given: "Show the captain of a steamship on a river." This sounds simple, but it isn't. A captain would normally be depicted in the wheelhouse. However, then one couldn't see that it's a steamship, and one wouldn't see that it's traveling on a river. To meet all requirements, the AI seems to have bent the realities we know a bit. Or, better said: the image generator wasn't "aware" that its construction is illogical or technically incorrect. The generator only recombined image patterns it knows from its training data in a way that fits the requirement. |
2022/2023: A Chattering AI for Everyone
Since the end of 2022, anyone can play around with an AI on their PC or smartphone:
The US company OpenAI made its Artificial Intelligence
↗"ChatGPT" publicly accessible
for free as a website on November 30, 2022.
Only registration with an email address and (unfortunately) a phone number is required.
I tried it out immediately back then and was instantly impressed that the website understands and answers any questions in German as well. ChatGPT's "mental" flexibility is shown by the fact that you can have it write a fantasy story according to your own specifications in seconds. If you ask the AI for information on well-documented topics, you get impressively good answers. However, if you ask for information that was not or insufficiently included in the training data, the AI chatters something that might be wrong. The provider also warns about this. The freely usable ChatGPT Version 3.5 was complemented in 2023 by the then paid Version 4.0. The main difference is that Version 4 was coupled with the image generator DALL·E. Now, for a monthly fee, you can also have images generated, like the one shown above. Just as
there's a problem with strange inaccuracies in the image shown above,
there's a similar problem with the text output.
From the beginning, users noticed that ChatGPT confabulates, i.e., it "hallucinates" missing information.
In humans, we speak of ↗confabulation when someone tries to recall more from their memory than is actually stored there. The brain then fills the memory gaps with things that are likely to fit. The person being questioned doesn't even realize that it's not a real memory. And we all do this: everyone has gaps that they automatically fill with creative explanations. When you're not under pressure, you can simply say casually: "I don't know."
However, when you're under pressure to make a statement now, or
when you have to deliver an essay of a certain length, for example, the tendency to confabulate increases.
Under the pressure to meet expectations, you waffle something that sounds fitting - hoping
that it might be correct (provoked confabulation).
|
AI in Specialized Applications
A few years ago, on TV
you could occasionally see new technology being used in sorting facilities.
Back then, it wasn't called AI yet, but rather a system for pattern recognition.
Objects passing by on a conveyor belt were filmed, and the pattern recognition
could decide in fractions of a second how each individual object should be further processed.
For example, visual pattern recognition can distinguish bad fruit from good, or sort the items in a waste sorting facility by material type. And at my dermatologist's office, the computer shows whether my moles have changed.

In bureaucracy-plagued Germany, one must naturally ask how AI could help reduce bureaucracy. AI could, for example, assist in decision-making by analyzing information and providing the case worker with recommendations and justifications. Additionally, AI systems could search for irregularities and patterns that indicate human errors or fraud. However, all of this would require that the necessary information is available digitally and not, as so often, on paper. This means that the lack of digitization is an annoyance in itself, and in the next step it also prevents the use of AI as a relief.

Another way to relieve administrative staff would be to use a chatbot as an information system. Some city administrations are preparing pilot projects to answer the most common general questions from citizens using AI. While this doesn't replace a case worker, it does relieve staff from answering the same questions over and over.
Companies are trying to replace their product catalogs or boring specifications with a
friendly AI shopping advisor.
But all beginnings are difficult: the press reported how the first chatbots of major providers could be coaxed into promising free additional services or price discounts that aren't actually offered.
This is possibly the same effect as the aforementioned response pressure and provoked confabulation.
Other examples of specialized applications are language translators, such as those from the German company DeepL, which can be integrated into existing applications if needed. API interfaces are offered for this purpose, through which data from a specialized application is transmitted to the external AI and back. In production, AI can use machine sensor data to predict when a machine will need maintenance, before a breakdown occurs.
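To make the API integration mentioned above more concrete, here is a minimal sketch in Python that sends a text to an external translation service and reads back the result. The endpoint, header, and field names follow DeepL's publicly documented REST API as an assumption and should be verified against the current documentation; the API key is a placeholder.

```python
# Minimal sketch: sending text from our own application to an external
# translation AI (here: DeepL) and receiving the result back.
# NOTE: endpoint, header, and field names follow DeepL's public REST API
# as an assumption - verify against the current documentation before use.
import requests

DEEPL_API_KEY = "your-api-key-here"  # placeholder, issued by DeepL
API_URL = "https://api-free.deepl.com/v2/translate"  # assumed free-tier endpoint

def translate(text: str, target_lang: str = "EN") -> str:
    """Send `text` to the external translation service and return the result."""
    response = requests.post(
        API_URL,
        headers={"Authorization": f"DeepL-Auth-Key {DEEPL_API_KEY}"},
        data={"text": text, "target_lang": target_lang},
    )
    response.raise_for_status()
    # The response is JSON; the translated text sits in the first entry
    # of the "translations" list (per DeepL's documented response format).
    return response.json()["translations"][0]["text"]

print(translate("Der Kapitän steuert das Dampfschiff auf der Donau."))
```

The same request/response pattern applies to other specialized AI services: the application transmits its data to the external AI and processes the answer it gets back.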
Another example concerns the growth and change of the population, e.g., predicting how many teachers, and how many professionals from completely different occupational groups, need to be trained to meet the future needs of the economy and society. |