The Edteq Association held a conference on Wednesday, October 5, on the ethical aspects of artificial intelligence in education. The conference was led by Carolanne Tremblay, director of the techno-pedagogical sector of the Dalia platform at Optania, and Simon Collin, professor at the Faculty of Education of the Université du Québec à Montréal, holder of the Canada Research Chair on Digital Equity in Education and researcher at the Centre de recherche interuniversitaire sur la formation et la pratique enseignante. Here is a summary.
Carolanne Tremblay is convinced that artificial intelligence (AI) in education should spare educational actors cognitive overload while reducing repetitive tasks. "Artificial intelligence in education must give school stakeholders back the time to build caring relationships with learners and help them reach their full potential."
Four main fields of application
She identifies four major areas of practical application of AI in education:
- Predictive models
These tools help school stakeholders identify at-risk situations, such as a learner dropping out or failing, before they occur. To do so, the AI builds a portrait of the student population in order to detect patterns of failure and dropout and allow staff to react in time.
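To make the idea more concrete, here is a minimal sketch, not tied to any product mentioned at the conference, of how such a predictive model could be built: a classifier is trained on hypothetical historical features (attendance, grades, assignment completion) and then flags students whose estimated dropout risk exceeds a threshold. All feature names, values, and thresholds are illustrative assumptions.

```python
# Illustrative only: a tiny dropout-risk classifier on hypothetical features.
import numpy as np
from sklearn.linear_model import LogisticRegression

# Hypothetical historical data: [attendance_rate, average_grade, assignments_completed_rate]
X_history = np.array([
    [0.95, 82, 0.90],
    [0.60, 55, 0.40],
    [0.88, 74, 0.85],
    [0.50, 48, 0.30],
])
y_history = np.array([0, 1, 0, 1])  # 1 = eventually dropped out

model = LogisticRegression().fit(X_history, y_history)

# Score the current cohort and flag students whose estimated risk exceeds a threshold.
X_current = np.array([[0.70, 60, 0.55], [0.92, 78, 0.95]])
risk = model.predict_proba(X_current)[:, 1]
for student_id, r in enumerate(risk):
    if r > 0.5:  # threshold chosen for illustration only
        print(f"Student {student_id}: estimated dropout risk {r:.2f} -> follow up")
```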
- Adaptive learning platforms
These platforms estimate a learner's knowledge of a subject from observable data and adjust the learner's academic path accordingly, working on weaknesses and building on strengths. The aim is to promote individualized teaching and pedagogical differentiation.
An example of an adaptive platform: Adapt Coaching Actuaries
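As an illustration of estimating a learner's knowledge from observable data, here is a minimal sketch of a Bayesian Knowledge Tracing update, a common technique in adaptive platforms and not necessarily the one Adapt uses: the estimated probability that a skill is mastered is revised after each correct or incorrect answer, and the platform can use that estimate to choose easier or harder exercises. The parameter values shown are assumptions.

```python
# Illustrative Bayesian Knowledge Tracing (BKT) update; parameter values are assumptions.
def bkt_update(p_mastery, correct, p_slip=0.1, p_guess=0.2, p_learn=0.15):
    """Update the estimated probability that a skill is mastered after one answer."""
    if correct:
        evidence = p_mastery * (1 - p_slip)
        posterior = evidence / (evidence + (1 - p_mastery) * p_guess)
    else:
        evidence = p_mastery * p_slip
        posterior = evidence / (evidence + (1 - p_mastery) * (1 - p_guess))
    # Account for the chance that the learner acquires the skill during practice.
    return posterior + (1 - posterior) * p_learn

p = 0.3  # prior belief that the skill is mastered
for answer in [True, False, True, True]:  # observable data: answer correctness
    p = bkt_update(p, answer)
    print(f"estimated mastery: {p:.2f}")
# The platform could serve harder items once p crosses, say, 0.8 (illustrative threshold).
```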
- Conversational agents (chatbots)
These tools can support learners and staff members at any time of day. They facilitate contact with young people and are each designed to meet a specific need.
Two examples of conversational agents: FLO from Alloprof and ALI from Optania.
- Real-time statistical analysis tools
Intended for educational personnel in direct contact with students, these tools let them draw up a real-time portrait of the learning situation so that they can intervene promptly as soon as a situation risks worsening.
An example of an analysis tool: Mozaïk-Portal, which provides staff members with an active, real-time student monitoring module.
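As a rough illustration, and not a description of how Mozaïk-Portal actually works, a real-time monitoring module can be sketched as a simple rule over a stream of incoming results: keep each student's most recent scores and alert staff when the trend deteriorates. The window size, threshold, and simulated scores below are assumptions.

```python
# Illustrative real-time monitoring: flag a student when recent results trend downward.
from collections import defaultdict, deque

WINDOW = 5            # number of recent scores kept per student (assumption)
ALERT_THRESHOLD = 60  # average score below which staff are alerted (assumption)

recent_scores = defaultdict(lambda: deque(maxlen=WINDOW))

def on_new_result(student, score):
    """Called each time a new result comes in; alerts staff if the situation worsens."""
    scores = recent_scores[student]
    scores.append(score)
    average = sum(scores) / len(scores)
    if len(scores) == WINDOW and average < ALERT_THRESHOLD:
        print(f"ALERT: {student} is trending down (recent average {average:.0f})")

# Simulated stream of incoming results for one student
for student, score in [("A", 70), ("A", 62), ("A", 55), ("A", 52), ("A", 50)]:
    on_new_result(student, score)
```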
A look at the ethical aspect
In the second part of the presentation, Simon Collin examined the ethical issues raised by the use of artificial intelligence systems (AIS). He advocates taking ethics into account from the design stage of an AIS in order to avoid problems later on.
He identifies six sources of tension that may arise when integrating AIS in education:
- The complexity of educational situations / technical standardization
Standardization could lead to an industrialization of education or to a lack of adaptability of AIS to educational situations.
- The agency of educational actors / technical automation
Automation risks reducing the agency of educational actors by increasingly constraining their roles. Moreover, an AIS could impose content or a pedagogy that is not appropriate to the student or to his or her learning style.
- School justice / technical rationality
However well they perform, AIS may neglect the well-being of educational stakeholders. There is also reportedly a lack of diversity in AIS design teams and insufficient consideration for equity, diversity, and inclusion in education.
- School governance / technical design
There is a lack of clarity about the moral and legal responsibility of design teams, educational actors, and institutions. In addition, ambiguity about data ownership can arise.
- Human intelligibility / AI opacity
The results of an AIS are difficult to explain. Simon Collin also points out the lack of transparency regarding how the results generated by an AIS are interpreted and used. Finally, users' lack of confidence in AIS can bias the results.
- Dignity of educational actors / data exploitation
Consent is a problem insofar as, in some cases, it is impossible to withdraw it. Simon Collin also raises the issue of AIS collecting too much data, or data that is too sensitive, while being less reliable and robust than one might think.
Simon Collin advocates involving all actors (designers, researchers, educational actors, etc.) to better address the ethical issues of AIS in education.