Artificial intelligence in education: issues of justice

In this text, Simon Collin and Emmanuelle Marceau discuss the integration of AI in education. According to them, it is becoming necessary to support teaching staff in proactively addressing the ethical and critical issues raised by AI.


By Simon Collin, University of Quebec at Montreal (Canada) and Emmanuelle Marceau, Cégep du Vieux Montréal-CRDP (Canada)

Introduction: potential of artificial intelligence for education

Artificial intelligence (AI) has attracted growing educational and scientific interest over the past thirty years, an interest that has recently accelerated with improvements in AI's technical performance (Becker, 2018).

In their systematic literature review, Zawacki-Richter et al. (2019) identify four main applications of AI in higher education:

1. profiling and prediction (e.g., admission to a study program, dropping out of school);
2. intelligent tutoring systems (e.g., teaching of educational content, feedback);
3. measurement and evaluation (e.g., automatic grading, school engagement);
4. adaptive and personalized systems (e.g., recommendation and selection of personalized content).

On the other hand, the ethical and critical issues raised by AI remain little studied in higher education (Zawacki-Richter et al., 2019), and in education more broadly (Krutka et al., 2021). Wishing to contribute to this emerging reflection, we propose to address some of the ethical and critical issues of AI, without claiming to be exhaustive, and to formulate some courses of action for taking them better into account, from the point of view of both design and use.

In doing so, it is important to keep in mind that the issues listed below are, for the most part, not specific to AI, as they also arise for other technologies. What is more, they are found in other areas of society where AI is used. However, these issues tend to be amplified by current developments in AI and to play out in a distinctive way in education, which in our view justifies a reflection focused on AI in education.

Some ethical and critical issues of AI in education

The ethical and critical issues raised by AI in education are multiple and have diverse origins. A first type of issue is linked to the massive data that AI requires, which can introduce biases and raises the question of respect for the privacy of students and school personnel (Andrejevic and Selwyn, 2020; Perrotta and Selwyn, 2020). Krutka et al. (2021) take the example of Google's education suite, which collects data without the free and informed consent of students and school staff (at odds with Google's own policies and those of provinces and states) and uses it in opaque ways. The data of students and school staff are thus used without their knowledge, breaching their privacy.

In addition, AI is mainly produced by private companies rather than by educational bodies (Williamson and Eynon, 2020; Selwyn and Gašević, 2020), and mainly studied by researchers in computer science or in science, technology, engineering, and mathematics rather than by researchers in the educational sciences (Zawacki-Richter et al., 2019). This situation generates a second type of ethical and critical issue, relating to the expertise and educational representations mobilized by design teams.

Outside education, several studies have already highlighted the lack of diversity within design teams, which results in representativeness biases ranging from the under-representation of certain social groups to their discrimination, stigmatization, or exclusion. In 2015, for example, the Google Photos algorithm tagged a photo of two Black Americans as "gorillas," its training having included too few dark-skinned faces (Plenke, 2015).

Finally, the increasing automation of AI means that it can take charge of a growing share of the educational actions that usually fall to students and school staff (Selwyn, 2019). This raises another type of ethical and critical issue, relating to the autonomy and professional judgment of teachers, as well as to the agency of students, depending on how tasks are distributed between them and the AI.

As an example, consider the case of behavior management systems reported by Livingstone and Sefton-Green (2016). These systems allow teachers to document harmful student behavior, which is then compiled and automatically reported to the school administration for proportional consequences.

Due to a lack of time in the classroom, some teachers document behavior after class, sometimes without informing the students concerned, who may then no longer remember it. This undermines the very principles of consistency and justice in education.

Preventing ethical and critical issues in AI: from design to use

Given these types of ethical and critical issues, it is possible to outline some avenues for reflection and action. First of all, these issues should be taken into account from the design phase onward, in order to prevent, as much as possible, negative consequences during use. Two questions then arise: to what extent do design teams integrate educational expertise and representations when they develop technologies involving AI? And to what extent are these educational expertise and representations representative of the diversity and uniqueness of Quebec school environments?

A first step in ensuring this is for design teams to opt for "user-centered" models (e.g., Labarthe, 2010), in order to maximize the consideration of educational expertise and representations and to preserve the educational purpose over economic and technical ones. A complementary step is to adopt and respect ethical design principles, such as systematically and explicitly informing users when they are interacting with an artificial intelligence system.

In terms of use, making students and school staff aware of the issues of AI in education involves integrating an explicit ethical and critical dimension into technology training. To be complete, this dimension should not be limited to the "good uses" of AI, but should also address the interactions between the design and use of AI on the one hand, and between uses and their educational and social implications on the other.

For example, the technoethical model of Krutka et al. (2019) opens an interesting path for initial and in-service teacher training: to determine whether a given technology is ethical, it proposes an analysis of its ethical, legal, democratic, economic, technological, and educational dimensions, guided by questions, along with elements to consider and practical applications to integrate into teacher training.

In lieu of a conclusion

The integration of AI in education is relatively recent, and the operationalization of its potential largely remains to come. To guide it, it seems necessary to us to proactively take into account the ethical and critical issues raised by AI, anchoring them within a broader reflection on justice. As such, ethics training for school personnel deserves to be prioritized, in order to best equip them to intervene and interact in a rapidly changing world.


  • Andrejevic, M., & Selwyn, N. (2020). Facial recognition technology in schools: critical questions and concerns. Learning, Media and Technology, 45(2), 115–128.
  • Becker, B. (2018). Artificial Intelligence in Education: What is it, Where is it Now, Where is it Going? In B. Mooney (Ed.), Ireland's Yearbook of Education (pp. 42–46). Dublin: Education Matters.
  • Krutka, D. G., Heath, M. K., & Staudt Willet, K. B. (2019). Foregrounding technoethics: Toward critical perspectives in technology and teacher education. Journal of Technology and Teacher Education, 27(4), 555–574.
  • Krutka, D. G., Smits, R. M., & Willhelm, T. A. (2021). Don't Be Evil: Should We Use Google in Schools? TechTrends.
  • Labarthe, F. (2010). Design and SHS in the user-centric innovation process: what reciprocal contributions? Escapes, 2, 14–25.
  • Livingstone, S., & Sefton-Green, J. (2016). The Class: Living and Learning in the Digital Age. New York: New York University Press.
  • Perrotta, C., & Selwyn, N. (2020). Deep learning goes to school: toward a relational understanding of AI in education. Learning, Media and Technology, 45(3), 251–269.
  • Plenke, M. (2015). Google just misidentified 2 African-Americans in the most racist way possible. Mic. Retrieved April 8, 2021 from:
  • Selwyn, N. (2019). Should Robots Replace Teachers? AI and the Future of Education. Cambridge: Polity Press.
  • Selwyn, N., & Gašević, D. (2020). The datafication of higher education: discussing the promises and problems. Teaching in Higher Education, 25(4), 527–540.
  • Williamson, B., & Eynon, R. (2020). Historical threads, missing links, and future directions in AI in education. Learning, Media and Technology, 45(3), 223–235.
  • Zawacki-Richter, O., Marín, V. I., Bond, M., & Gouverneur, F. (2019). Systematic review of research on artificial intelligence applications in higher education – where are the educators? International Journal of Educational Technology in Higher Education, 16(1), 1–27.

© Authors. This work is distributed under the Creative Commons Attribution 4.0 International license.
