
    Researchers from the Wrocław University of Science and Technology “broke” ChatGPT

    A group of scientists from the Faculty of Information and Communication Technology, working within the CLARIN project, decided to check whether artificial intelligence is really as versatile as claimed. In their experiment, they asked nearly 40,000 questions across 25 task categories and published the results in a scientific article.

    ChatGPT interacts in a conversational way. The dialogue format makes it possible for ChatGPT to answer follow-up questions, admit its mistakes, challenge incorrect premises, and reject inappropriate requests.

    Nobody had done this before

    “OpenAI has released the Chat Generative Pre-trained Transformer (ChatGPT) and revolutionized the approach in artificial intelligence to human-model interaction. The first contact with the chatbot reveals its ability to provide detailed and precise answers in various areas. There are several publications on ChatGPT evaluation, testing its effectiveness on well-known natural language processing (NLP) tasks. However, the existing studies are mostly non-automated and tested on a very limited scale. In this work, we examined ChatGPT’s capabilities on 25 diverse analytical NLP tasks, most of them subjective even to humans, such as sentiment analysis, emotion recognition, offensiveness and stance detection, natural language inference, word sense disambiguation, linguistic acceptability and question answering. We automated ChatGPT’s querying process and analyzed more than 38k responses. Our comparison of its results with available State-of-the-Art (SOTA) solutions showed that the average loss in quality of the ChatGPT model was about 25% for zero-shot and few-shot evaluation. We showed that the more difficult the task (lower SOTA performance), the higher the ChatGPT loss. It especially refers to pragmatic NLP problems like emotion recognition. We also tested the ability of personalizing ChatGPT responses for selected subjective tasks via Random Contextual Few-Shot Personalization, and we obtained significantly better user-based predictions. Additional qualitative analysis revealed a ChatGPT bias, most likely due to the rules imposed on human trainers by OpenAI. Our results provide the basis for a fundamental discussion of whether the high quality of recent predictive NLP models can indicate a tool’s usefulness to society and how the learning and validation procedures for such systems should be established,” they wrote.
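    The paper’s two key technical ingredients are automated prompting and what the authors call Random Contextual Few-Shot Personalization, i.e. prepending examples of a given user’s own annotations to the prompt before asking the actual question. The sketch below is a minimal illustration of how such automated querying could look, assuming the official OpenAI Python client (v1+) and an OPENAI_API_KEY environment variable; the prompt wording, label set, and model name are illustrative assumptions, not the authors’ actual pipeline.

    ```python
    # A minimal sketch (not the authors' actual code) of automated ChatGPT
    # querying for a subjective NLP task such as sentiment analysis.
    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment


    def classify_sentiment(
        text: str,
        few_shot_examples: list[tuple[str, str]] | None = None,
    ) -> str:
        """Ask ChatGPT for one sentiment label, optionally with few-shot context."""
        messages = [{
            "role": "system",
            "content": "Answer with exactly one word: positive, negative, or neutral.",
        }]
        # Few-shot personalization: prepend (text, label) pairs, e.g. drawn at
        # random from a given user's own annotations, before the real query.
        for example_text, label in few_shot_examples or []:
            messages.append({"role": "user", "content": f"Text: {example_text}\nSentiment:"})
            messages.append({"role": "assistant", "content": label})
        messages.append({"role": "user", "content": f"Text: {text}\nSentiment:"})

        response = client.chat.completions.create(
            model="gpt-3.5-turbo",  # illustrative model choice
            messages=messages,
            temperature=0,          # deterministic answers ease automated scoring
        )
        return response.choices[0].message.content.strip().lower()


    # Zero-shot query:
    print(classify_sentiment("The plot was predictable, but I loved the soundtrack."))

    # Few-shot, user-contextualized query:
    print(classify_sentiment(
        "Another Monday...",
        few_shot_examples=[("What a great day!", "positive"),
                           ("This is awful.", "negative")],
    ))
    ```

    Running such a loop over tens of thousands of labeled test items and comparing the returned labels against gold annotations is, in essence, how a large-scale automated evaluation like the one described above can be scored.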

    The more difficult the task, the worse ChatGPT performed

    According to Dr. Maciej Kawecki, writing for tysol.pl, the Polish researchers checked how ChatGPT reacts to sarcasm, whether it understands jokes, and whether it can capture the broader context of statements. Unfortunately, the more difficult the task, the worse ChatGPT performed.

    ChatGPT made mistakes that almost anyone would have noticed.

