By Marcel Labelle, one of the fathers of AQUOPS
Marcel Labelle is one of the founders of the Association québécoise des utilisateurs d'outils technologiques à des fins pédagogiques et sociales (AQUOPS). He has come out of retirement to "bring an old fool's comments on the issue of artificial intelligence (AI) in education". He believes this new idea is worth a closer look: we need to talk about it and explore it further. That is the purpose of his comments, which we present to you here.
==
Let's not get ahead of ourselves with AI in education. Exploring simple questions and simple answers has been easy since ELIZA, the first software of its kind, in 1966. In the examples given, we get a straightforward answer without too much complexity. This is exciting, but...
An aside on the challenge of complexity.
A living being is a complex organism, because everything in it is interconnected. The human being has a nervous system, a cardiovascular system, a skeletal system, and so on. We all know that a health problem at the cardiovascular level can have an impact on all the other systems. Here, we should not confuse complicated with complex. Walking across a city from one place to another can be complicated if the route is unknown. A GPS is a tool that can help solve that kind of road complication. In the case of a human being's health, we cannot find a similar tool to solve the problem. It is too complex.
Strong AI refers to a machine intelligence that would be equivalent to human intelligence. Its main characteristics include the ability to reason, solve puzzles, make judgments, plan, learn, and communicate. Weak AI, for its part, most often uses various branches of mathematics to solve certain types of problems. With AI, we would all like to arrive at answers to complex questions.
It's a little different with ChatGPT, or simply with the notion of a conversational agent.
A chatbot, or conversational agent ("agent conversationnel" in French), is software that can interact with a customer or a student, for example, and simulate a conversation the way a human being would. The word is formed from "chat" (pronounced in English), which refers to an online discussion, and "bot", for robot. It aims to give the illusion that a program is thinking by carrying on a sensible dialogue. In customer support, a chatbot maintains an ongoing interaction with the client to resolve their issue. The customer has an intention: to solve a problem that may be complicated. The chatbot must be able to guess that intention.
Conversational agents use language understanding to analyze and interpret questions and answers. They draw on theoretical computer science, propositional logic, and linguistics. Some topics may be difficult for chatbots to address, but it is not impossible. After a long period of learning, a chatbot can show signs of intelligence.
A chatbot's software is also a learning system that introduces AI, or machine learning, by first using keywords detected in a possible succession of questions from a customer. It uses those keywords to keep the conversation going, asking for more detail on the first question and on the ones that follow. Part of the answers are found on the Internet with a tool such as Google. The more questions there are, the more keywords there will be.
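To make that mechanism concrete, here is a minimal sketch in Python, not taken from any real product: the keywords and follow-up questions are invented for illustration, and a real chatbot would use far richer language processing.

# A minimal sketch of the keyword mechanism described above: the bot scans
# each customer message for known keywords and uses them to ask for more
# detail, keeping the conversation going.

FOLLOW_UPS = {
    "password": "Are you trying to reset your password or create a new account?",
    "invoice": "Which invoice number are you asking about?",
    "delivery": "Can you give me your order number so I can check the delivery?",
}

def detect_keywords(message: str) -> list[str]:
    """Return the known keywords found in the customer's message."""
    words = message.lower().split()
    return [kw for kw in FOLLOW_UPS if kw in words]

def reply(message: str) -> str:
    """Pick a follow-up question based on the first detected keyword."""
    keywords = detect_keywords(message)
    if keywords:
        return FOLLOW_UPS[keywords[0]]
    return "Can you tell me a little more about your problem?"

print(reply("I lost my password and cannot log in"))
# -> "Are you trying to reset your password or create a new account?"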
As the question-and-answer system grows, it becomes more complex. With hundreds of thousands of questions from multiple sources on the same topics, the system learns more about the critical path to giving the customer the right answer. That is what machine learning is all about here: repeating the questions and answers of thousands of users who often have similar questions.
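As a toy illustration of this learning by repetition, and nothing more than that, one can imagine counting which answer most often resolved past requests sharing the same keywords; the tickets and answers below are invented.

# Learning by repetition, in miniature: for each set of keywords, count which
# answer resolved the request most often, then serve that answer to the next
# customer with the same keywords.

from collections import Counter, defaultdict

resolved_tickets = [
    ({"password", "reset"}, "Use the 'Forgot password' link on the login page."),
    ({"password", "reset"}, "Use the 'Forgot password' link on the login page."),
    ({"password", "reset"}, "Contact the administrator."),
    ({"invoice", "missing"}, "Invoices are under Account > Billing."),
]

answer_counts: dict[frozenset, Counter] = defaultdict(Counter)
for keywords, answer in resolved_tickets:
    answer_counts[frozenset(keywords)][answer] += 1

def best_answer(keywords: set[str]) -> str:
    """Return the answer that most often resolved requests with these keywords."""
    counts = answer_counts.get(frozenset(keywords))
    if not counts:
        return "I don't know yet -- let me hand you over to a human agent."
    return counts.most_common(1)[0][0]

print(best_answer({"password", "reset"}))
# -> "Use the 'Forgot password' link on the login page."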
Deep learning then comes into play: the software looks for the optimal critical path through this complex network of questions and answers. Picture what is called a neural network; the accumulated experience of users can reveal the best solution paths as it is propagated through that network.
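Here is a small sketch of that idea, assuming the scikit-learn library is available: a tiny feed-forward neural network trained on past questions learns to route a new question toward the intent, and therefore the answer path, that worked before. The questions and intents are invented for illustration.

# A small neural network that maps customer questions to intents.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline

questions = [
    "I forgot my password",
    "How do I reset my password?",
    "Where can I download my invoice?",
    "I never received my invoice",
    "My package has not arrived",
    "When will my order be delivered?",
]
intents = ["password", "password", "invoice", "invoice", "delivery", "delivery"]

# Text -> TF-IDF features -> small feed-forward neural network.
model = make_pipeline(
    TfidfVectorizer(),
    MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000, random_state=0),
)
model.fit(questions, intents)

print(model.predict(["my password does not work anymore"]))  # likely ['password']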
The problem with some conversational agents is that they appear too early, as a solution that has not been tested, and users reject them. Mental-health chatbots are popular, but they can be controversial. Customer support services for connected health devices struggle when it comes to answering questions that should be answered by health professionals.
One good practice among some providers is to first record the phone conversations of people who call for support. In this way they can identify the sequences of keywords that lead to a solution, without making the customer suffer through too many bad answers.
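Assuming those recorded calls have already been transcribed to text, the mining step might look like the following sketch, which simply counts which keywords tend to appear together in calls that ended with a solution; the keyword list and transcripts are made up.

# Count which pairs of keywords co-occur in transcribed support calls,
# so the chatbot can later ask about them in a sensible order.

from collections import Counter
from itertools import combinations

KEYWORDS = {"password", "reset", "invoice", "delivery", "refund"}

transcripts = [
    "hi I forgot my password can you reset it please",
    "my password expired I need a reset link",
    "I want a refund because the delivery never arrived",
]

pair_counts: Counter = Counter()
for transcript in transcripts:
    found = [w for w in transcript.lower().split() if w in KEYWORDS]
    # Count each unordered pair of keywords seen in the same call.
    pair_counts.update(combinations(sorted(set(found)), 2))

print(pair_counts.most_common(3))
# e.g. [(('password', 'reset'), 2), (('delivery', 'refund'), 1)]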
Another wise and prudent approach is to organize chatbot competitions so that experimenters can find the best solution. It is a different way of doing machine learning. Sometimes participants give up along the way, because the chatbot reaches its limits of complexity and cannot solve the problem. The specialists who design them then try to find better ways.
Beyond the simple tree structure guiding the conversational agent through preconceived answers, how much AI is really in today's chatbots? And what about the future?
In short, we are still in the infancy of AI in education. However, we can hope...
These days, it is easier to fool the world with promises of AI conjugated in the future and conditional tenses than to try to better understand the complexity of the current reality of human beings and of education. Magical thinking sells better. Welcome to the world of naivety, because we have a lot of work to do before we can augment reality enough to convince ourselves of the virtual dream.
Another aside, this time on generative AI.
We are already seeing the emergence of tools such as the Dall-E 2 image generator from OpenAI. There is a generative model for music called Jukebox that allows users to automatically create songs that mimic the styles of specific artists. AI is increasingly being used to automatically caption live audio and video. These types of content generators are becoming more sophisticated by the day and are reaching the point where people are having a hard time telling the difference between artificially rendered works and those created by humans. What do our artists think? In short, it's not only education that will be transformed by AI!
The next Colloquium of the Association québécoise des utilisateurs d'outils technologiques à des fins pédagogiques et sociales (AQUOPS) will take place on April 4, 5 and 6, 2023 in Quebec City.