How Large Language Models Work: Workshop

DATE: Thursday, April 24, 2025
TIME: 15:00 – 17:00
VENUE: Transformer, Rennweg 89a, 1030 Wien

On April 24, 2025, the VCLA organized a workshop for project Transformer as part of the Future Fit Festival. The workshop was designed to help participants understand the basics of large language models (LLMs), their capabilities and limitations, and the conceptual differences between human creativity and machine learning.

First, the participants shared practical applications of large language models in everyday life. Prof. Szeider, Co-Chair of the VCLA, then introduced the basic concepts through an interactive exercise based on Claude Shannon’s information theory and showed how patterns in language can be identified and predicted.
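As a rough illustration of the idea behind such an exercise (not the exercise actually used at the workshop), the following Python sketch counts which character tends to follow which in a small sample text and then guesses the most frequent successor, in the spirit of Shannon's guessing game:

```python
from collections import Counter, defaultdict

# Minimal sketch, not the workshop's actual exercise: a character bigram
# model in the spirit of Shannon's guessing game, predicting the most
# likely next letter from observed frequencies in a small sample text.
sample = "the quick brown fox jumps over the lazy dog. the dog sleeps."

# Count how often each character follows each other character.
following = defaultdict(Counter)
for current_char, next_char in zip(sample, sample[1:]):
    following[current_char][next_char] += 1

def predict_next(char: str) -> str:
    """Return the most frequently observed successor of `char`."""
    successors = following.get(char)
    return successors.most_common(1)[0][0] if successors else "?"

print(predict_next("t"))  # 'h', as in "the"
print(predict_next("q"))  # 'u', the only observed successor
```

Even this tiny model shows how regularities in language make the next symbol partly predictable, which is the core intuition the exercise builds on.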

The event made clear that large language models such as ChatGPT are not knowledge repositories but pattern-recognition systems trained on large data sets to generate human-like text. These models process language in small units, so-called tokens, and calculate probabilities over possible next tokens when generating text.
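The following Python sketch is a toy illustration of that principle, not a real tokenizer or language model: it splits text into word-level tokens, estimates next-token probabilities from counts, and samples a continuation one token at a time.

```python
import random
from collections import Counter, defaultdict

# Toy example only: word-level "tokens" and a count-based next-token
# distribution, sampled one token at a time.
corpus = "the model predicts the next token the model generates text".split()

# Count which token follows which.
transitions = defaultdict(Counter)
for tok, nxt in zip(corpus, corpus[1:]):
    transitions[tok][nxt] += 1

def next_token_distribution(token: str) -> dict:
    """Convert raw successor counts into a probability distribution."""
    counts = transitions[token]
    total = sum(counts.values())
    return {t: c / total for t, c in counts.items()}

def generate(start: str, length: int = 6) -> list:
    """Sample a short continuation token by token."""
    out = [start]
    for _ in range(length):
        dist = next_token_distribution(out[-1])
        if not dist:
            break
        tokens, probs = zip(*dist.items())
        out.append(random.choices(tokens, weights=probs)[0])
    return out

print(next_token_distribution("the"))  # e.g. {'model': 0.67, 'next': 0.33}
print(" ".join(generate("the")))
```

Real LLMs use learned subword tokenizers and neural networks rather than raw counts, but the generation loop, predicting a probability distribution over the next token and sampling from it, follows the same pattern.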

Through practical demonstrations, participants gained insight into the strengths and limitations of these AI systems. The workshop offered a valuable perspective on how large language models work in our increasingly AI-integrated society.

The next computer science workshop at the Transformer site, “Understanding Computer Science,” is scheduled for May 22.
