The Trends Observatory of the eLearn Center at the Universitat Oberta de Catalunya (UOC) has entered into a collaboration with observatories at the Tecnológico de Monterrey (Observatorio de Innovación Educativa) and the Pontificia Universidad Católica del Perú (Novedades Académicas), under which they regularly organize expert debates, which we call “dialogues”.
The most recent of these, the “Dialogue on Artificial Intelligence and Chatbots in Education” – in which I took part – was held in November. There, I had the chance to talk to César Beltrán Castañón, from the Pontificia Universidad Católica del Perú (PUCP), and Omar Olmos López, from the Tecnológico de Monterrey, the institution that moderated the debate.
The representatives from the PUCP, the Tecnológico de Monterrey and the UOC during the “Dialogue on Artificial Intelligence and Chatbots in Education”.
This was the third in the series of dialogues organized as part of the collaboration. The first in the series was the “Dialogue on Monitoring Graduates”, with the participation of Carles Rocadembosch, director of Alumni, as the UOC’s representative. The second, the “University-Business Dialogue”, featured the participation of Alberto Sánchez, transfer specialist with the UOC Knowledge Transfer and Research Support Office (OSRT).
In this final debate on artificial intelligence and chatbots in education, we shared ideas on a number of topics, including how artificial intelligence and chatbots, and their roll-out, will affect universities (both higher education itself and the way we learn), the role that artificial intelligence will play among teaching staff when it arrives in the classroom, and how students will receive it. After an interesting debate, I reached the following conclusions:
Video: “Dialogue on Artificial Intelligence and Chatbots in Education”
The following is a transcription of my own notes and thoughts on the questions posed during the debate:
Question: Artificial intelligence (AI), chatbots and other technological resources and tools have grown in many activities and have even replaced routine and mechanical work and have improved performance in terms of accuracy, regularity and costs. In this sense, should they be seen as a threat to the job market? What degree of confidence can be placed in these technological tools and resources?
Guillem Garcia Brustenga: Right now, I don’t think that chatbots and AI will replace professors; rather, they will take over certain repetitive tasks of a low cognitive level, freeing professors from these tasks so they can devote their time to more critical, strategic and high cognitive-level duties.
Chatbots in education will work as colleagues for professors, administrative and services staff, and especially for students. This machine-human interaction is a key association in which, presumably, jobs won’t be lost, just specific tasks, such as answering administrative questions related to dates for submitting work or correcting exercises. Each one will do the task they can do the most efficiently.
We envisage professors who train bots and algorithms to mentor and answer students in the same way the professors themselves do: with the same explanations and the same examples. The professors train the bots to reach where they cannot: many more people, at any time and in any time zone.
Q: Does the university have to redefine itself to incorporate the changes brought about by technology?
GGB: Yes, of course, on a number of fronts. On the technological front, obviously: we need up-to-date, relevant, quality data, and an infrastructure and data mart that let us work with it, not just in the usual way (generating spreadsheets or reports) but also to generate predictions and machine learning models so we can offer AI-based services.
We are seeing a digital transformation, not just a technological change. AI will transform many different roles in the university. To manage such great change, we will need to offer staff training so they can adapt to their new roles in the AI age. There has to be a cultural change among the managers and the Office of the President (they need to know what can and cannot be done with AI, understand its impact, etc.), among the professors (adapt to new tasks and assign others, understand the AI models), among the IT staff (how to work with the data, how to create models), among the management staff, etc. And as regards the students too, a transition plan needs to be put in place that is not overly burdensome. Here we need to ensure good design focused on users and usability.
Q: How is the use of AI and chatbots being integrated in the way we learn in higher education?
GGB: An important distinction in the field of AI is between weak AI (or narrow AI) and strong AI (or artificial general intelligence, AGI). The first consists of a computer program that uses AI techniques (machine learning, deep learning) and is designed to solve a specific problem (from playing chess to detecting pedestrians and obstacles in the way). AGI, however, refers to machines with the ability to solve many types of problem in the way that humans do; in other words, they can understand their environment and reason like a human. All the AI applications we’ve seen so far are examples of weak AI. AGI doesn’t exist yet.
However, in recent years, we’ve seen some very significant advances in AI techniques that allow us to predict a viable AGI within the next two decades. Perhaps in twenty years, our professors will be chatbots based on artificial general intelligence.
In the meantime, we think that it’s worth starting to consider all the questions that will arise from this situation. Some of the answers can be applied now, given the present state of the technology.
And with our gaze focused on the twenty-year horizon, we also have to start thinking as a society about the role that AI will play in education, the implications of AGI and the role that people and society should have in all this. So, what happens if we apply this AGI to education? What should AI, a bot or an assistant working in teaching be like in the long term?
First of all, let’s take a look at the chatbots currently found in different educational institutions. These chatbots can be classified in various ways.
Classification of chatbots by educational and non-educational aim
With no educational aim: these are chatbots that take part in educational tasks of an administrative nature (student guidance) and a support nature (answering FAQs). Most educational chatbots are in this group.
With an educational aim: these are designed to promote teaching and learning directly. They are basically of two types:
- Tutors to provide scaffolding for the learning process: they can adapt, select and sequence content in line with the student’s needs and pace, encourage reflection and provide learning motivation.
- Exercise and practice programs for skills acquisition: they offer a stimulus in the form of a question or problem, and the student provides an answer. This is assessed automatically by the chatbot, which gives the student immediate feedback.
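The exercise-and-practice loop described above can be sketched in a few lines. This is a minimal, hypothetical illustration (the questions, answers and matching rule are invented for the example, not taken from any real system): the bot poses a stimulus, assesses the student’s answer automatically, and returns immediate feedback.

```python
# Hypothetical drill bank: each item is a stimulus, an expected answer
# and a hint to return as feedback when the answer is wrong.
DRILLS = [
    {"question": "What does 'AGI' stand for?",
     "answer": "artificial general intelligence",
     "hint": "Think of AI that can reason across many kinds of problem."},
    {"question": "Is the AI behind a chess engine weak or strong AI?",
     "answer": "weak",
     "hint": "It solves one specific, narrow problem very well."},
]

def assess(student_answer: str, expected: str) -> bool:
    """Very naive automatic assessment: normalized exact match."""
    return student_answer.strip().lower() == expected

def run_drill(drill: dict, student_answer: str) -> str:
    """One stimulus/answer exchange with immediate feedback."""
    if assess(student_answer, drill["answer"]):
        return "Correct! Moving on to the next exercise."
    return f"Not quite. Hint: {drill['hint']}"
```

A real tutor of this type would also adapt the sequence of drills to the student’s pace, as described for the scaffolding tutors above; here the assessment is deliberately the simplest possible.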
Classification of chatbots by task
- Administrative and management tasks.
- Answering FAQs.
- Student mentoring.
- Specific skills practice.
- Student learning assessment.
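Of these tasks, answering FAQs is where most current educational chatbots sit. As a toy sketch of that job (the questions, answers and word-overlap matching are invented for the example, not the UOC’s actual bot), a bot can match the student’s message against a bank of known questions and return the best answer:

```python
# Hypothetical FAQ bank mapping known questions to canned answers.
FAQS = {
    "When is the deadline for submitting the assignment?":
        "Assignments are due on the date shown in your course calendar.",
    "How do I enrol on a new course?":
        "You can enrol from the 'Enrolment' section of the virtual campus.",
}

def answer_faq(message: str) -> str:
    """Pick the stored question sharing the most words with the message."""
    msg_words = set(message.lower().split())
    best_q = max(FAQS, key=lambda q: len(msg_words & set(q.lower().split())))
    if not msg_words & set(best_q.lower().split()):
        # No overlap at all: hand the query over to a human.
        return "Sorry, I don't know that one; I'll pass it on to the office."
    return FAQS[best_q]
```

Production chatbots use intent classifiers rather than raw word overlap, but the division of labour is the same: the bot handles the repetitive questions, and anything it cannot match is escalated to staff.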
If we look ahead and start to think of an assistant that really mentors the student in a more complex way, we encounter a great many ethical dilemmas and bias-related risks. The following are examples:
Honesty and transparency: is it fair to deceive students by not telling them that the assistant professor is an AI bot?
Bias due to incorrect training of the machines: the AI bot’s responses may be wrong because it was trained on unreliable data, such as previous answers given by other students (in debates), previous interactions with the student or material from the internet that has not been validated. We may even be transmitting biases that people have, such as misogyny, racism, etc. The human expert (professor) must be present in this process to validate the training data. We have to put a “human in the loop” to ensure that there is no bias.
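One simple way to picture the “human in the loop” is as a review gate in front of the training set. The sketch below is illustrative only (the class names and fields are assumptions, not the UOC’s pipeline): candidate question–answer pairs harvested from past interactions wait in a queue, and nothing reaches the training data until a professor approves it.

```python
from dataclasses import dataclass, field

@dataclass
class Candidate:
    """A harvested question/answer pair awaiting expert validation."""
    question: str
    answer: str
    source: str          # e.g. "forum", "past-interaction", "web"
    approved: bool = False

@dataclass
class TrainingSet:
    review_queue: list = field(default_factory=list)
    validated: list = field(default_factory=list)

    def submit(self, cand: Candidate) -> None:
        """Harvested data is never trusted by default; it waits for review."""
        self.review_queue.append(cand)

    def professor_review(self, cand: Candidate, ok: bool) -> None:
        """The human expert validates (or rejects) each candidate."""
        self.review_queue.remove(cand)
        if ok:
            cand.approved = True
            self.validated.append(cand)
```

The design point is simply that the approval step is mandatory: the bot trains only on `validated`, never directly on what it harvested.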
Biases in the models: the final (programmed) aims of an educational AI bot could be varied and even contradictory. The objective may be for the student to learn (and so run the risk of the AI bot setting difficult and very challenging activities that could lead to the student failing). Or it could be for the student to pass (and then there is the danger that the AI bot sets tests that are too easy, suggests the answers and makes passing the course too simple, thereby hindering students’ learning). The objective may even be for the student to enrol on a lot of courses (and then perhaps the AI bot does not provide realistic information about a student’s ability to take so many courses and hides potential difficulties in passing the course). Each stakeholder in the teaching process (teaching staff, students, marketing, finance departments, society, job market, etc.) may have opposing objectives. In this case, we need to put “society in the loop”, i.e. involve it and draw up an educational social contract.
Q: Is technology based on AI and chatbots in use in your university? What projects are currently under way?
GGB: At the UOC, a 100% online university, we have an advantage: we have a lot of historical data. This gives us reason to believe that it is possible to use these technologies. We currently have two projects under way.
The first, Soul University, consists of a series of pilot trials with our data and some chatbot engines to validate the UOC’s vision of chatbots + AI. The intention is that on the basis of these trials, the Soul University initiative should act as a platform to forge ahead with transformation projects in all areas of the university (teaching, learning and management). We think that it is applicable to many of the activities in a student’s life cycle.
Besides this, LIS is an innovation and research project that provides predictive models to generate an adaptive system on the Campus that can help students complete their course successfully.
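To give a flavour of the kind of predictive model a project like LIS builds (the features, weights and threshold below are invented for the example and are not LIS’s actual model), a completion predictor can be as simple as a logistic function over engagement signals, with a flag for students the adaptive system should reach out to:

```python
import math

# Hypothetical weights for three engagement features: logins per week,
# fraction of assignments submitted, and forum posts per week.
WEIGHTS = {"logins": 0.35, "submitted": 2.0, "posts": 0.25}
BIAS = -1.5

def completion_probability(logins: float, submitted: float, posts: float) -> float:
    """Logistic model: sigmoid of a weighted sum of engagement features."""
    z = (BIAS
         + WEIGHTS["logins"] * logins
         + WEIGHTS["submitted"] * submitted
         + WEIGHTS["posts"] * posts)
    return 1.0 / (1.0 + math.exp(-z))

def needs_support(logins: float, submitted: float, posts: float,
                  threshold: float = 0.5) -> bool:
    """Flag students the adaptive system should proactively help."""
    return completion_probability(logins, submitted, posts) < threshold
```

In practice the weights would be fitted on the historical data mentioned above rather than set by hand, but the shape of the system is the same: features in, a probability out, and an intervention rule on top.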
Q: What do you predict the integration of AI in the University will be like and how will it be received by the teaching staff, management staff and students?
GGB: At the UOC, we have started running an educational chatbot pilot using different technologies available on the market. We’ve also started to ask ourselves about the design, functions and risks of the educational chatbots of the future, when artificial intelligence will enable things that today are merely at the research or technological-prospecting stage.
With the chatbot as the vehicle, it appears that artificial intelligence may offer a blend of attributes that at first seem contradictory: education produced en masse and personalized education and care, at the same time.
Consequently, I think that integration will be inevitable. Good change management will also be inevitable, as there will be a redefinition of important roles, among teaching staff, students and management. A chatbot has processes behind it, which entail different training tasks from the ones we are used to; therefore, there is bound to be some resistance to change.
However, the evolution of society is in our favour. Universities and education won’t be the first environments to live alongside AI and chatbots; we’re already seeing them elsewhere.
Author of the post
Guillem Garcia Brustenga (@txerdiakov) has a telecommunications engineering degree from the Universitat Politècnica de Catalunya (UPC) and is director of Trend Spotting and Analysis at the UOC’s eLearn Center. He is interested in innovation models and strategies, companies’ digital transformation, the opportunities that arise due to the exponential growth of technology and the effect that this has on people, society, the job market and education.