Ethical Leadership in AI-Enabled Schools: Navigating Human Rights, Equity, and Justice in Education
The integration of artificial intelligence (AI) into primary and secondary education presents both pedagogical opportunities and ethical challenges. This study examines how ethical leadership can guide the use of AI in compulsory schooling. Drawing on international legal instruments, including the United Nations Convention on the Rights of the Child (UNCRC), the European Convention on Human Rights (ECHR), the European Union’s Charter of Fundamental Rights and the Council of Europe’s Charter on Education for Democratic Citizenship and Human Rights Education (EDC/HRE), within the spirit of the EU’s AI Act, the first binding worldwide horizontal regulation on AI, this study adopts a normative human rights framework. It analyzes critical issues such as data privacy, algorithmic bias, equity, and democratic accountability.
The study develops a six-pillar model of ethical leadership that integrates legal obligations, ethical reasoning, and pedagogical priorities. This model offers school leaders a principled and contextually responsive approach to AI governance in education, ensuring that digital innovation supports the justice, inclusion, and rights of all learners.
Introduction
The EU’s Regulation (EU) 2024/1689 (the Artificial Intelligence Act), the first binding worldwide horizontal regulation laying down harmonised rules on artificial intelligence, defines an AI system as “a machine-based system designed to operate with varying levels of autonomy and that may exhibit adaptiveness after deployment and that, for explicit or implicit objectives, infers, from the input it receives, how to generate outputs such as predictions, content, recommendations, or decisions that can influence physical or virtual environments” (Article 3, Chapter 1) [1]. Artificial intelligence is transforming compulsory education, with tools like predictive analytics, adaptive learning systems, and automated grading now widespread in schools. These technologies offer personalisation and efficiency but also raise ethical concerns, including privacy breaches, algorithmic bias, and diminished human interaction.

[1] Regulation (EU) 2024/1689 of the European Parliament and of the Council of 13 June 2024 laying down harmonised rules on artificial intelligence and amending Regulations (EC) No 300/2008, (EU) No 167/2013, (EU) No 168/2013, (EU) 2018/858, (EU) 2018/1139 and (EU) 2019/2144 and Directives 2014/90/EU, (EU) 2016/797 and (EU) 2020/1828 (Artificial Intelligence Act), published on 12 July 2024. It entered into force in August 2024 and will be fully applicable 24 months after entry into force. However, rules on general-purpose AI systems that need to comply with transparency requirements apply 12 months after entry into force, i.e., after August 2025. Accessible from: http://data.europa.eu/eli/reg/2024/1689/oj.
AI literacy (AILit) has become essential for ethical leadership in this context, as it refers to the technical expertise, durable skills, and forward-thinking attitudes necessary to succeed in an AI-driven world, enabling users to combine engaging with, managing, creating, and designing AI with critical and ethical evaluation of its advantages, risks, and implications (OECD, 2025). Thus, the development of an AI Literacy Framework for Primary and Secondary Education (AILit Framework) aims to equip educators and learners with the ability to critically understand AI’s risks and opportunities, which is key to ethical decision-making (OECD, 2025). Ethical leaders must ensure that all stakeholders, especially teachers and students, develop the literacy needed to engage responsibly with AI.
“Ethical leadership” is broadly defined as a leadership style grounded in moral responsibility, integrity, fairness, and concern for others (MacIntyre, 1984; Brown & Treviño, 2006). In the context of artificial intelligence (AI), ethical leadership can be approached as the practice of guiding the development, regulation, and implementation of AI technologies according to ethical principles such as fairness, transparency, accountability, and the protection of human rights (Chayanusasanee Jundon et al., 2025). In compulsory education, which encompasses primary and secondary schooling, ethical leadership assumes added complexity. Children are uniquely vulnerable because of their diminished psychosomatic and legal status (Michopoulou, 2023b), and schools involve multiple stakeholders with conflicting interests. The introduction of AI has heightened these tensions, demanding not only technical proficiency but also strong ethical judgement from school leaders. Foundational models such as transformational and servant leadership remain instructive. Transformational leadership promotes shared moral vision and collective ethical growth (Bass, 1985; Leithwood & Jantzi, 2006). Servant leadership centers empathy and prioritizes the needs of students and educators (Greenleaf, 1977). However, these models must now be adapted to the socio-technical challenges posed by AI.
Simultaneously, concerns such as algorithmic opacity, surveillance, and bias require leadership that is both ethically sound and legally informed. Leaders must uphold children’s rights and educational equity through frameworks that address both social and technological dimensions (Harry, 2023; Michopoulou, 2023a, 2025). Polat et al. (2025) argue that ethical AI leadership involves risk assessment, social impact awareness, and accountability mechanisms. Shapiro and Stefkovich’s (2001) four paradigms of educational ethics are helpful in navigating this terrain: (i) the ethic of justice, which emphasizes fairness, equity, and equality, warning that AI must not reproduce discrimination (Floridi et al., 2018; Dastin, 2018, October 10; Michopoulou, 2025); (ii) the ethic of care, which stresses empathy and individual attention, especially regarding psychological safety and inclusion (Gilligan, 1982; Buolamwini & Gebru, 2018); (iii) the ethic of critique, which interrogates power and cultural exclusion, particularly relevant for AI tools developed without a local context (Eguchi et al., 2021; Harry, 2023); and (iv) the ethic of profession, which highlights legal and institutional standards, including the EU’s General Data Protection Regulation (GDPR) [2] and mandates for inclusive education (Paul, 2024; Polat et al., 2025). These ethical lenses are not mutually exclusive; leadership should integrate them through reflective and structured training (Eyal et al., 2011).

[2] Regulation (EU) 2016/679 of the European Parliament and of the Council of 27 April 2016 on the protection of natural persons with regard to the processing of personal data and on the free movement of such data and repealing Directive 95/46/EC (General Data Protection Regulation), which has applied since 25 May 2018. Accessible from: http://data.europa.eu/eli/reg/2016/679/oj.
The practical application of these principles is especially relevant with AI tools such as behavior monitoring and automated grading. Here, leaders must evaluate the trade-offs between efficiency and core values such as trust, fairness, and student well-being. Empirical research supports such a holistic approach. Eguchi et al. (2021) found that culturally contextualised AI pedagogy fosters civic and ethical engagement. Ottenbreit-Leftwich et al. (2023) identified three dimensions of teacher readiness: AI literacy, social impact awareness, and strategies to nurture critical thinking. These findings align with the AILit Framework, which treats technical knowledge, ethical reasoning, and critical reflection as integral to AI literacy in schools (OECD, 2025), as well as with the EU AI Act, which promotes a human-centered and risk-based approach to the adoption of AI systems. These studies underscore the role of leaders in building institutional capacity. Harry (2023) warns that uncritical AI adoption can deskill educators and marginalize professional judgement. She calls for leadership that is inclusive and grounded in transparency and community values. To that end, Michopoulou (2023a) argues that under international human rights instruments, such as the Universal Declaration of Human Rights, the International Covenant on Economic, Social and Cultural Rights (ICESCR), and the UN Convention on the Rights of the Child (UNCRC), which set international standards, governments and schools share responsibility for introducing technological tools when redesigning educational systems, with a view to ensuring respect for civic rights and human dignity, empowering children to develop their personality, and fostering democratic consciousness.
Thus, ethical leadership in AI-enhanced education transcends legal compliance. It is a relational and normative practice that draws on philosophical ethics, regulatory mandates, and the lived realities of teachers and students. Education must be seen not as service delivery but as a collective endeavor committed to justice, equality, autonomy, and respect for democratic principles (Michopoulou, 2025). Polat et al. (2025) echo this by advocating moral and institutional accountability as the foundation of ethical school leadership in the AI era. This conceptual foundation sets the stage for the thematic analysis that follows. Leaders must critically assess the role of AI and align its use with human rights, democratic values, and inclusive pedagogical aims. Through the integration of legal, ethical, and educational principles, ethical leadership can reframe AI as a means of empowering learners and cultivating democratic citizenship in schools. This is the purpose of this study: to demonstrate that ethical leadership is central to ensuring that AI promotes educational justice and democratic values, and to provide school leaders with tools for principled AI integration toward this goal.
Methods
This study employs a qualitative interdisciplinary methodology that synthesizes legal, ethical, and educational perspectives to examine how ethical leadership can guide the integration of AI in compulsory schooling. Reinforced by the empirical and conceptual literature, this study proposes a structured and original combined interpretation of existing legal, normative, and policy sources to support school leaders in adopting principled approaches to AI integration while promoting human rights and democratic principles. The methodological approach consisted of three interrelated components:
First, a thematic literature synthesis was adopted. In this sense, a narrative review was conducted based on academic articles, legal instruments, policy documents, and institutional reports published between 2018 and 2025. Sources were identified through targeted keyword searches on Scopus, JSTOR, ERIC, and Google Scholar using terms such as “ethical leadership,” “AI in education,” “ethical AI,” “child rights and technology,” “educational AI policy,” and “education for democratic citizenship.” The selection criteria prioritized publications that engaged directly with normative theory, educational leadership, AI governance, and legal frameworks applied to primary and secondary education.
The analysis is interpretive rather than empirical, consistent with a constructivist orientation. The reviewed literature was categorized thematically into key domains, including ethical leadership principles, rights-based governance, equity and inclusion, institutional accountability, and democratic participation. These domains were used to organize the conceptual landscape of the study and guide the interpretation of the findings.
Second, a comparative policy analysis was conducted. To illustrate cross-national variation in AI governance, this study includes a structured comparison of educational policies and implementation strategies across three jurisdictions: the United Kingdom, the European Union, and Singapore. Singapore was chosen as a leading Asian example and regional pioneer in the regulatory and policy-driven integration of AI into education. These jurisdictions were selected for their contrasting educational governance structures: the UK as a decentralized, market-oriented system, the EU as a region governed by harmonized rights-based regulations, and Singapore as a centralized model characterized by technological advancement and state-led implementation.
The comparison focused on four analytical dimensions: legal and policy frameworks, ethical and equity implications, teacher capacity and curricular integration, and data governance. These dimensions were derived inductively from the literature and reflect key areas in which leadership interventions influence AI integration outcomes. Our aim was not to evaluate these systems hierarchically but to draw critical insights into the ethical and structural conditions that affect inclusive AI deployment.
Third, a normative-ethical evaluation was performed. This final component of the methodology applies an ethical interpretive lens, guided by the four paradigms of educational ethics proposed by Shapiro and Stefkovich (2001): justice, care, critique, and profession. These paradigms are used to assess leadership responses to ethical challenges in the adoption of AI in education. The evaluation also considers international legal frameworks, including the United Nations Convention on the Rights of the Child (UNCRC), the First Additional Protocol to the European Convention on Human Rights (ECHR), the International Covenant on Economic, Social and Cultural Rights, and the European Union Artificial Intelligence Act. These instruments provide a binding legal basis for assessing how AI aligns with the principles of equity, transparency, and human dignity.
This methodological design is reinforced by empirical and conceptual literature. For example, jurisdictions such as the UK, the EU, and Singapore demonstrate distinct governance models. The UK’s fragmented approach often results in inconsistent practices, EU frameworks promote rights-based safeguards, and Singapore’s infrastructure enables rapid implementation, although often without sufficient ethical oversight. In addition, existing findings on “ethical AI” support the article’s central claim that ethical leadership is essential to ensure that AI promotes justice and inclusion rather than reinforcing systemic disparities. Eguchi et al. (2021) highlight the importance of culturally responsive pedagogy and student agency, while Ottenbreit-Leftwich et al. (2023) call for co-designed, age-appropriate, and ethically informed AI curricula. Li et al. (2024) warn of the risks associated with the unregulated use of educational technologies, and Harry (2023) draws attention to the depersonalization of teaching when AI substitutes for professional judgement.
By adopting such a “triangular” methodological approach, that is, thematic literature synthesis, comparative policy analysis, and normative-ethical evaluation, the present study constructs a principled and contextually grounded framework for evaluating the ethical integration of AI in compulsory education and proposes a model of six interdependent pillars of ethical AI leadership for the implementation of this framework. This triangulated methodology allows for a comprehensive understanding of how legal standards, ethical theory, and governance models can intersect in designing leadership practices that advance equity, protect rights, and uphold democratic values in educational settings.
The findings presented in the following section are structured according to the thematic domains identified through this triangulated methodology, illustrating how ethical leadership can be operationally applied across jurisdictions.
Findings
Thematic Literature Synthesis and Evaluation
Regarding the use of AI in education, such technologies are increasingly embedded in compulsory K–12 education through applications such as automated grading systems, behavior-monitoring platforms, predictive analytics, and adaptive learning software. Tools like Google Teachable Machine, Scratch, and Machine Learning for Kids are now common in classrooms, especially within project-based and game-based learning contexts (Li et al., 2024). These platforms aim to enhance interactivity, improve assessment efficiency, and offer support that is tailored to diverse learner profiles. AI offers significant benefits when implemented effectively. It can streamline operations, personalize learning experiences, and assist teachers in monitoring student progress. By enabling differentiated instruction and the early identification of at-risk students, AI allows educators to concentrate on relational and responsive teaching practices. Such applications can foster more inclusive and dynamic learning environments, particularly when integrated into pedagogical models that value creativity and iterative learning.
Despite these advantages, AI integration raises critical ethical and pedagogical concerns. As Li et al. (2024) observed, many AI projects in schools are short-lived and dependent on infrastructures that remain inaccessible in resource-limited settings. Interfaces may be poorly designed for certain age groups, thus undermining their pedagogical value. Educators often report insufficient training in AI literacy, leaving them unable to explain algorithmic processes or evaluate the implications of these tools. Without ethical oversight, AI risks reinforcing existing inequalities, privileging efficiency over learning quality, and encouraging surveillance-based schooling cultures.
Pedagogical responses should focus on ethical reflexivity and cultural responsiveness. Eguchi et al.’s (2021) Contextualising AI (CAI) project illustrates how engaging students with locally relevant AI themes, such as facial recognition and racial bias, can deepen understanding and ethical awareness. By linking abstract AI concepts to lived experiences, the students critically examined both technical mechanisms and their broader societal implications. Ottenbreit-Leftwich et al. (2023) propose a multidimensional framework for teacher preparedness. This comprises conceptual AI literacy, awareness of the social and ethical implications of technology, and pedagogical strategies that support critical thinking and student empowerment. Their research suggests that, in the absence of robust institutional investment in teacher development, AI implementation risks becoming a technocratic exercise detached from educational values. Furthermore, Harry (2023) cautioned against the deskilling of teachers due to overreliance on automated tools. She advocated empowering educators through co-leadership in AI decision-making. In this context, ethical leadership refers to fostering emotional intelligence, reflective judgement, and shared accountability in AI adoption processes.
As such, at the institutional level, ethical leadership must embed AI not only into the curriculum but also into the governance structures and value systems of schools. Culturally inclusive and ethically aware frameworks can guide the selection, deployment, and evaluation of AI tools. For learners from marginalized backgrounds, access to ethical AI education is not merely about acquiring digital skills. It represents a pathway to agency, justice, and recognition in an increasingly digital society.
Human Rights and Legal Considerations
The right to education, as affirmed by international legal frameworks, including the UNCRC [Articles 28 and 29], the ICESCR [Article 13], the First Additional Protocol to the European Convention on Human Rights (ECHR) [Article 2(1)], and the European Union’s (EU) Charter of Fundamental Rights [Article 14], mandates equitable and non-discriminatory access to learning. However, the uneven integration of AI into schools threatens to widen existing disparities. Students in under-resourced regions often lack access to digital infrastructure, while AI systems trained on limited datasets can reproduce bias, undermining the principle of equal treatment (Li et al., 2024; Berendt et al., 2020).
The adoption of AI in education must align with the legal imperative of non-discrimination. As emphasized by Michopoulou (2025), equality is not aspirational but a binding requirement of inclusive schooling that defines the full content of the right to education as a “right to inclusive and equitable education.” Nevertheless, most AI policies remain weak in terms of enforcing safeguards. Instruments such as the GDPR, while offering a foundational privacy framework, fail to address educational concerns such as long-term profiling or the suppression of learner autonomy (Berendt et al., 2020). Children’s privacy and autonomy are especially at risk, because many AI tools collect significant personal data with limited transparency or consent mechanisms. Educators, often untrained in data ethics, may deploy systems that infringe on student rights (Ottenbreit-Leftwich et al., 2023). Harry (2023) highlights the dangers of commercial partnerships that prioritize surveillance over student welfare, while Michopoulou (2025) affirms children’s right to challenge technologies that erode their dignity.
To promote legal coherence and civic accountability, Holmes et al. (2023) advocated binding regulatory standards informed by Council of Europe principles. They called for policies grounded in human dignity, with robust accountability systems and transparent oversight. Eguchi et al. (2021) supported integrating AI literacy into curricula to strengthen digital rights awareness, especially among young learners. However, structural inequality remains a barrier to inclusion. Gottschalk and Weise (2023) argue that equitable access to culturally relevant content and digital tools is fundamental. AI systems may marginalize students from low-income or remote settings, particularly when design processes reflect dominant cultural norms (Li et al., 2024). Ethical leadership must address these disparities through adaptive curricula, teacher training, and infrastructure partnerships. Eguchi et al.’s (2021) CAI project offers a model of culturally grounded pedagogy that enhances both relevance and ethical awareness.
Market-driven AI implementation risks further excluding local perspectives. Harry (2023) warns of top-down deployments that bypass community input. Baker and Hawn (2022) showed how predictive systems can misclassify vulnerable students due to training on biased data. Fedele et al. (2024) propose tools such as the Assessment List for Trustworthy Artificial Intelligence (ALTAI) to audit systems for fairness, transparency, and representational justice. Thus, participatory governance is essential. Fedele et al. (2024) stressed that inclusion should entail both access and voice. Ethical leadership should embed this dual commitment in policy and practice, ensuring that AI not only reaches all students but also respects their agency, rights, and diverse identities.
Equity Before Inclusion
Equity must underpin all efforts to integrate AI into education. Addressing systemic inequalities in infrastructure, access, and pedagogical relevance is a prerequisite for any meaningful inclusion initiative, because equity guarantees a fair distribution of resources and opportunities. Without this foundation, diversity strategies risk being tokenistic, reinforcing rather than challenging entrenched disparities (Gottschalk & Weise, 2023).
Moreover, ethical leadership entails expanding access to AI tools and embedding culturally responsive content that reflects the realities of diverse learners. In under-resourced regions, inadequate digital infrastructure and pedagogical misalignment with AI tools developed elsewhere lead to exclusion. These systems, often trained on homogenous datasets, can marginalize students by reproducing cultural bias and misclassifying minority or multilingual learners (Li et al., 2024; Baker & Hawn, 2022). To mitigate such harms, ethical leaders must adopt a proactive approach: auditing AI tools for fairness using instruments like the ALTAI checklist, involving educators and communities in design processes, and equipping teachers with the knowledge to identify and address algorithmic bias (Fedele et al., 2024). Training and support are particularly urgent in rural and low-income areas, where educators often lack the resources to evaluate AI’s impact or adapt it to student needs (Ottenbreit-Leftwich et al., 2023).
Effective responses require more than just redistribution. They demand intersectional awareness, the promotion of student agency, and the integration of pluralist perspectives into curricula and governance. Eguchi et al.’s (2021) CAI project exemplifies how ethical and locally grounded AI education can enhance both inclusion and civic engagement. Market-driven AI implementation presents additional challenges. Tools introduced without local consultation risk overlooking contextual realities and disempowering educators. Harry (2023) and Baker and Hawn (2022) cautioned against vendor-led deployments that bypass community participation. Ensuring equity means institutionalizing participatory governance, transparent evaluation, and inclusive decision-making. Overall, true inclusion cannot be achieved without prior equity. Leaders must embed fairness in AI’s development, deployment, and oversight, ensuring continuous monitoring of discriminatory outcomes and facilitating opt-out provisions where necessary. Equity is not a supplement to inclusion; it is an essential condition.
Justice and Accountability
A central ethical challenge in AI-integrated education is accountability: who is responsible when AI systems adversely affect students? Predictive algorithms and behavioral analytics can misclassify learners; however, their inner workings are often opaque. Li et al. (2024) highlighted the prevalence of “black box” systems in schools, where neither educators nor students fully understand the rationale behind AI-generated outcomes.
In this context, justice requires transparency and clear governance. Schools must implement mechanisms to review AI-driven decisions and establish accessible redress processes. Accountability must be distributed among developers, educators, and institutions. Eguchi et al. (2021) showed that when students engage critically with AI, they develop both technical and ethical insights. Their CAI project illustrated how learners can analyze facial recognition tools and question bias in predictive systems, enhancing ethical agency.
Transparency is crucial in empowering teachers. Ottenbreit-Leftwich et al. (2023) reported that many educators are compelled to rely on AI tools without sufficient understanding, undermining their capacity to intervene when harm occurs. They advocate embedding ethical literacy into professional development so that teachers can identify and address algorithmic biases. These competencies must be supported by institutional accountability frameworks. Educational leaders should define roles and responsibilities, establish grievance mechanisms, and require transparency from AI vendors, as ethical leadership entails a proactive posture: anticipating risks, mitigating harms, and ensuring fair treatment in all AI-mediated educational processes. Harry (2023) emphasized the need for anticipatory accountability, arguing that the passive adoption of AI tools implicates educators in systemic injustices. Her analysis supported embedding checks and balances, inclusive design practices, and meaningful student participation in AI governance. This is further examined by Holmes et al. (2023), who argue that accountability must extend to public authorities. Their legal perspective calls for enforceable frameworks capable of auditing, tracing, and rectifying AI-related harm, including rigorous vetting, transparent complaint systems, and statutory protections for learners.
Additionally, the principle of scrutiny is essential: AI systems should be understandable to educators, students, and guardians, not just to developers. This transparency fosters trust and upholds the pedagogical integrity of technological use. According to the Council of Europe’s Expert Group, as cited by Holmes et al. (2023), member states should mandate clear legal standards for AI in education. These include procedural safeguards, participatory oversight, and protection of learner rights.
Consequently, justice in AI education must be embedded in both development and classroom use. To this end, ethical leadership means aligning implementation with legal norms and moral commitments to protect and empower all students, ensuring that AI serves as a tool for inclusion rather than exclusion.
Democratic Citizenship and Ethical AI
Education for Democratic Citizenship (EDC) and Human Rights Education (HRE) are foundational to compulsory schooling and are supported by international legal frameworks such as the UNCRC, ICESCR and the Council of Europe’s Charter on Education for Democratic Citizenship and Human Rights Education (EDC/HRE). As AI becomes more embedded in educational practices, educational leadership must ensure that democratic values remain central to its adoption.
Schools function as democratic microcosms. Bäckman and Trafford (2007) argued that inclusive and participatory governance are essential features of a democratic school culture. Applying this to AI means involving students, educators, and parents in technology-related decisions, rather than relying on top-down mandates. Transparency is central to this vision. AI systems used for behavioral surveillance, algorithmic grading, and predictive analytics must be scrutinized not only for technical efficacy but also for their alignment with democratic principles. If implemented without ethical safeguards, these tools risk undermining student autonomy, trust, and civic engagement (Herrera-Poyatos et al., 2025; Zidouemba, 2025).
Participatory governance is critical. Eddebo et al. (2025) warn that technology can become self-legitimizing when not grounded in public accountability. Ethical AI must be co-developed with those it affects, ensuring that systems reflect diverse perspectives and support educational aims rather than subverting them. Zidouemba (2025) cautioned that algorithmic nudging may suppress freedom of thought and critical engagement, whereas Herrera-Poyatos et al. (2025) emphasized explainability and institutional oversight as mechanisms for trust-building. Michopoulou (2023a) argued that civic literacy and critical thinking are legal obligations rather than pedagogical luxuries. Therefore, leaders must ensure that AI tools support, rather than replace, the dialogic and reflective practices essential to democratic education. Institutionalizing these values requires governance structures that are transparent, inclusive, and aligned with the public interest and the rule of law. Adopting such a democratic governance policy, structure, and process ensures that introducing AI methods will enable students’ participation and autonomy and cultivate the democratic identity that prepares them for an active role in a free society. Furthermore, Holmes et al. (2023) advocated regulatory frameworks that position schools as civic institutions accountable for promoting rights-based AI adoption.
In summary, democratic leadership in AI education demands more than compliance. It requires intentional strategies that embed EDC and HRE into policy, foster participatory governance, and uphold transparency as a non-negotiable standard. When guided by these principles, AI can enhance, rather than erode, the civic mission of education.
Comparative Ethical and Policy Dimensions of AI Integration in K–12 Education
The integration of artificial intelligence in K–12 education presents divergent regulatory, ethical, and pedagogical challenges across national contexts. A comparison of AI governance in the United Kingdom, the European Union, and Singapore highlights commonalities and critical contrasts across four key dimensions.
Legal and Policy Frameworks: The United Kingdom adopts a decentralized and fragmented approach. While frameworks such as the Children's Code exist, their implementation is inconsistent, with local authorities bearing responsibility without sufficient support (Felix & Webb, 2024). In contrast, the EU benefits from legal instruments such as the EU Charter of Fundamental Rights and Regulation (EU) 2024/1689 of the European Parliament and Council of June 13, 2024 (Artificial Intelligence Act), the first binding worldwide horizontal regulation laying down harmonized rules on AI, which entered into force in August 2024. Singapore's model is centralized and infrastructure-rich, enabling consistent implementation, although ethical oversight remains underdeveloped (Dabbagh et al., 2025).
Ethical and Equity Considerations: Each system grapples with distinct ethical tensions. In the UK, the 2020 Ofqual grading controversy exposed systemic inequities and the justice implications of opaque algorithms (Denes, 2023). European countries address equity through inclusive pedagogies and legal commitments to digital rights (Sampson et al., 2025). Singapore's emphasis on academic stratification has raised concerns regarding AI-driven hierarchies and fairness (Tan, 2024).
Teacher Capacity and Curriculum Integration: Common across all contexts is the gap between teacher agency and ethical literacy. Educators in the UK face vendor-led implementations with limited influence or preparation (Felix & Webb, 2024). European models prioritize participatory curriculum design and professional development with an ethical focus (Bellas et al., 2023). Singaporean teachers, while technically proficient, lack structured training in the normative dimensions of AI (Dabbagh et al., 2025). Curricular alignment also varies. AI use in UK schools is commercially driven and pedagogically inconsistent (Mintz et al., 2023). European programs such as AI Erasmus+ demonstrate more integrated and inclusive use. In Singapore, AI is embedded in subjects such as language acquisition but risks overreliance without pedagogical mediation (Wen et al., 2024).
Data Governance and Transparency: Data ethics remains challenging. The UK's Children's Code is inadequate for addressing education-specific risks (Felix & Webb, 2024). It took effect on September 2, 2020, and was designed to be consistent with the GDPR; however, it is under review following the Data (Use and Access) Act, which came into law on June 19, 2025, and introduces new requirements for online services that are directed at children or likely to be used by them. The EU has made strides in embedding data literacy and ethical use through curriculum-linked initiatives (Tedre et al., 2021). Singapore's extensive data collection lacks corresponding safeguards for consent and transparency (Tan, 2024).
This comparative overview reveals varying balances between innovation, regulation, and ethical depth. The UK exemplifies market-driven decentralization with equity risks, the EU advances coordinated and rights-based integration, and Singapore demonstrates operational efficiency but limited ethical governance. Across all contexts, effective AI adoption requires principled leadership, stakeholder engagement, and robust ethical frameworks that center inclusion and justice.
Discussion: Designing and Implementing a Framework for Ethical AI in Education
Strategic Framework for Ethical AI in Education
The integration of artificial intelligence into educational systems requires a coherent and ethically grounded strategic framework. This section outlines foundational objectives for guiding school leaders and policymakers in cultivating a responsible AI ecosystem. The EU's Artificial Intelligence Act sets a regulatory framework with direct effect in its Member States, but it is left to them to adopt the necessary enforcement laws, ministerial decisions, and circulars that give effect to the content of the Regulation.
Ethical Leadership for Principals and Policymakers
Ethical leadership requires the cultivation of school cultures rooted in dignity, democratic participation, and transparency. Educational leaders must be prepared to navigate AI's ethical complexities. Professional development should incorporate ethical theory, legal frameworks, and inclusive approaches to enhance leadership in digital contexts (Holmes et al., 2023; Erümit et al., 2024). At the policy level, district-wide guidelines must uphold principles of privacy, equity, and cultural responsiveness (Holter et al., 2024).
Equitable Implementation of AI Governance
An ethically sound AI framework should prioritize human rights and address systemic inequalities. Ethical audits are critical for assessing equity impacts (Holmes et al., 2023). Policies must incorporate inclusivity at every stage, particularly for underrepresented populations (Bellas et al., 2023; Ministry of Education Singapore, n.d.). The formation of inclusive governance bodies and ethics committees ensures accountability and stakeholder representation.
Democratic and Rights-Based Education
Democratic education must extend beyond digital proficiency to encompass civic participation and ethical reflection. It favors equitable participation in a pluralistic environment of tolerance, diversity, inclusion, and respect for human rights and democratic principles (Michopoulou, 2025). AI literacy should be framed as a democratic entitlement, enabling students to scrutinize technological implications (Bellas et al., 2023; Holmes et al., 2023). Including students in curriculum co-design fosters inclusivity and strengthens ethical citizenship (Gouseti et al., 2025).
Operational Model: Pillars of Ethical AI Leadership
To implement this framework, we propose a model of six interdependent pillars of ethical AI leadership. Though novel in synthesis, these pillars are grounded in the literature on ethical implementation, governance, and civic engagement (Holter et al., 2024; Ministry of Education Singapore, n.d.; Polat et al., 2025; Adams et al., 2023; Gouseti et al., 2025), also taking into consideration the existing legal framework and the democratic principles enshrined in it. They are as follows:
Pillar 1: Rights-Based Foundation. AI governance must be aligned with international frameworks on children's rights, and evaluations must consider privacy, discrimination, and autonomy. Given the intrusive character of AI systems, children's vulnerable position, and the imbalance of power in educational environments, AI systems intended to detect the emotional state of students could lead to detrimental or unfavorable treatment, and such use should therefore be prohibited. Children's rights, as enshrined in Article 24 of the EU Charter of Fundamental Rights and the United Nations Convention on the Rights of the Child (UNCRC), require that the vulnerability of children and the safeguarding of their well-being guide the introduction of AI systems in education. The EU's AI Act classifies as "high-risk" those AI systems used to evaluate students' learning outcomes, to assess the appropriate level of education that students will receive or be able to access, thereby determining the course of a child's educational life, or to monitor and detect prohibited behavior of students during tests, as these systems may violate the right to education and the right not to be discriminated against. In addition, applying the equality principle in education means substantive, quality education, accessible to all, regardless of the student's background and status (Michopoulou, 2025). Thus, equitable access must be prioritized: AI curricula must address systemic barriers and support marginalized communities through inclusive design and equitable access (Eguchi et al., 2021; Ottenbreit-Leftwich et al., 2023; Ministry of Education Singapore, n.d.). According to the AILit Framework, only 46% of students believe that their schools adequately prepare them for AI, and 49% worry that AI could widen academic gaps.
This highlights the urgency for ethical leadership to embed AI literacy as a right and a means to address structural inequities (OECD, 2025).
Pillar 2: Ethical Decision-Making Framework. Leadership decisions should integrate justice, care, duty, and utility. Practical tools include ethics scorecards, stakeholder logs, and risk-benefit assessments. Ethical theories such as deontology, utilitarianism, and virtue ethics guide decisions through principles of duty and moral law (Kant, 1785), the consequences of actions (Mill, 1863), and the character and virtues of the decision-maker (Solomon, 2003; Micewski & Troy, 2007). In parallel, when facing ethical dilemmas arising from the use of AI systems, ethical decision-making for education leaders requires the adoption of various perspectives, including an "ethic of care," which employs empathy and attends to individuality, and an "ethic of critique," which questions any new methodological tool that may lead to exclusion and discrimination among pupils, undermining the school's pluralistic and democratic orientation and the harmonious development of their personality. Within this framework, school principals incorporating AI methods in schools should base their decisions on empathy and responsibility for the well-being of every student, seeking to empower the weaker ones, while taking into consideration the values, beliefs, and desires of the community as essential stakeholders in decision-making. For example, tasks linked to identity and justice can promote critical engagement (Eguchi et al., 2021; Gouseti et al., 2025). Conversely, AI systems that use subliminal or deceptive techniques impairing students' free will and ability to make informed decisions manipulate children's choices, intrude on their personality, cause significant harm, impede the development of children's personality and democratic citizenship, and consequently violate the right to equitable quality education and its objectives (Michopoulou, 2023a, 2025).
Pillar 3: Transparency and Stakeholder Engagement. Transparency ensures that AI systems are traceable and explainable. In education, it means that students should be aware that they are communicating or interacting with an AI system, know its capabilities and limitations, and be informed about how their rights may be affected by its use. Transparency also makes parents partners in the introduction of AI tools, enhancing their trust in new AI-based educational methods applied to their children's education. Since parents are key stakeholders in the effectiveness of educational reforms, they can contribute to the success of the new methods. Collaborative design further strengthens this participatory model; stakeholder co-design ensures contextual relevance, with teachers and students playing a central role in AI tool development and policy formulation (Ottenbreit-Leftwich et al., 2023; Adams et al., 2023). For example, transparent procedures and participatory governance through ethics panels, student councils, and parent forums are vital for fostering trust (Council of Europe, 2017; Adams et al., 2023). In addition, school leaders should prioritize explainable AI; teachers must be empowered to critically evaluate AI tools (Li et al., 2024; Eguchi et al., 2021; Ottenbreit-Leftwich et al., 2023), conduct regular assessments before integrating AI tools into their educational methods, and monitor their performance on students (Burgueño López, 2024). Furthermore, data governance and management practices should include, in the case of personal data, transparency regarding the original purpose of data collection.
Ministries of Education are not exempt from obligations; they should attend to the transparency of the AI systems to be employed in education, asking the providers of such models to make publicly available a sufficiently detailed summary of the content or main data used for training a general-purpose AI model, and generally verifying the validity of the licence for the model, the elements of the model, and the process of its development, including the algorithm's structure, the type and provenance of data, and curation methodologies.
Pillar 4: Accountability and Oversight. Institutions must appoint ethics officers and independent review teams to oversee AI practices and provide effective grievance mechanisms. At the EU level, for example, the AI Act designates the European Data Protection Supervisor as a competent market surveillance authority for ensuring appropriate and effective enforcement of the Act's requirements by Union institutions and bodies, and establishes the European Artificial Intelligence Board, empowered to facilitate consistency and coordination among the national competent authorities of the Member States.
Pillar 5: Professional Development and Ethical Culture. Training must include AI ethics, legal standards, and critical pedagogy. Schools should foster a shared culture of ethical engagement (Holter et al., 2024; Mansouri, 2025). Ongoing professional training should emphasise ethical reasoning and interdisciplinary collaboration (Ottenbreit-Leftwich et al., 2023; Polat et al., 2025). This training should be grounded in the AI literacy competences outlined in the AILit Framework, which emphasises the integration of AI literacy into existing classroom practices and advocates continuous professional development to support AI-informed pedagogy (OECD, 2025).
Pillar 6: Child-Centric and Caring Design. AI tools should support human relationships, allow opt-out options, and incorporate student and teacher input. Fairness in data and design is essential to prevent bias (Ministry of Education Singapore, n.d.). Children's best interests, a universal principle enshrined in international instruments for the protection of children's rights, should guide the evaluation of any implementation of AI systems.
This model aligns with Creemers and Kyriakides' (2006) dynamic model of educational effectiveness for promoting inclusive and high-quality interventions, as our model may enrich and evolve the eight factors of Teacher Effectiveness Research (TER): orientation (task), structuring (lesson), questioning techniques, teaching modelling, application (tasks), classroom climate (contribution of the teacher), management of time, and teacher evaluation.
Conclusions
Ethical leadership is indispensable for navigating the opportunities and risks associated with AI integration into compulsory education. This article articulated a rights-based, six-pillar framework that positions ethical leadership as both a normative and practical imperative. Grounded in legal obligations and ethical reasoning, the model suggests a strategic approach to ensuring that AI supports equity, inclusion, and democratic citizenship. The study employed a qualitative, interdisciplinary methodology, combining thematic literature synthesis, normative legal interpretation, and comparative policy analysis across the UK, the EU, and Singapore. This approach enables a structured examination of how ethical leadership can align AI adoption with human rights and educational justice.
Nevertheless, this study has several limitations. Existing ethical leadership frameworks are often under-theorized in relation to emerging technologies and frequently lack mechanisms for enforceable accountability. Resistance to ethical leadership may also arise from entrenched market dynamics and policy environments that prioritise innovation and efficiency over justice and inclusion. Moreover, as the study is conceptual in nature and includes no new empirical data or fieldwork, it offers a theoretical basis for an ethically grounded strategic framework for AI's integration into educational systems; its practical implementation remains to be assessed.
Future research should investigate the operationalization of ethical leadership frameworks in diverse educational settings through empirical and cross-jurisdictional studies. There is a pressing need for interdisciplinary studies that bridge education, law, technology, and ethics. As AI technologies continue to evolve, educational leadership must remain vigilant, adaptive, and committed to safeguarding the rights and dignity of all learners in a democratic environment that cultivates democratic citizenship in schools and serves EDC.
In other words, ethical leadership must not only manage technological change but also reshape its trajectory towards justice, equity, and collective flourishing.
References
- Adams, C., Pente, P., Lemermeyer, G., & Rockwell, G. (2023). Ethical principles for artificial intelligence in K-12 education. Computers and Education: Artificial Intelligence, 4, 100131. https://doi.org/10.1016/j.caeai.2023.100131
- Baker, R. S., & Hawn, A. (2022). Algorithmic bias in education. International Journal of Artificial Intelligence in Education, 32, 1052–1092. https://doi.org/10.1007/s40593-021-00285-9
- Bass, B. M. (1985). Leadership and Performance Beyond Expectations. New York: Free Press; Collier Macmillan.
- Bellas, F., Guerreiro Santalla, S., Naya, M., & Duro, R. J. (2023). AI curriculum for European high schools: An embedded intelligence approach. International Journal of Artificial Intelligence in Education, 33, 399–426. https://doi.org/10.1007/s40593-022-00315-0
- Berendt, B., Littlejohn, A., & Blakemore, M. (2020). AI in education: Learner choice and fundamental rights. Learning, Media and Technology, 45(3), 312–324. https://doi.org/10.1080/17439884.2020.1786399
- Brown, M. E., & Treviño, L. K. (2006). Ethical leadership: A review and future directions. The Leadership Quarterly, 17(6), 595–616. https://doi.org/10.1016/j.leaqua.2006.10.004
- Buolamwini, J., & Gebru, T. (2018). Gender shades: Intersectional accuracy disparities in commercial gender classification. Proceedings of Machine Learning Research, 81, 77–91. https://proceedings.mlr.press/v81/buolamwini18a.html
- Burgueño López, J. (2024). Implications of artificial intelligence in education: The educator as ethical leader. Journal of Interdisciplinary Education: Theory and Practice, 6(2), 142–152. https://doi.org/10.47157/jietp.1505319
- Bäckman, E., & Trafford, B. (2007). Democratic Governance of Schools. Strasbourg: Council of Europe Publishing.
- Chayanusasanee Jundon, S., Niyomves, B., Kaewlamai, S., & Pawala, T. (2025). Ethical leadership and decision making in AI: Navigating educational management ethics in the digital age. Journal of Education and Learning Reviews, 2(3), 105–122. https://doi.org/10.60027/jelr.2025.1458
- Council of Europe (2017). Learning to Live Together: Council of Europe Report on the State of Citizenship and Human Rights Education in Europe. Strasbourg: Council of Europe. https://www.coe.int/en/web/education/edc-hre-state-report-2017
- Creemers, B. P., & Kyriakides, L. (2006). Critical analysis of the current approaches to modelling educational effectiveness: The importance of establishing a dynamic model. School Effectiveness and School Improvement, 17(3), 347–366. https://doi.org/10.1080/09243450600697242
- Dabbagh, H., Earp, B. D., Mann, S. P., Plozza, M., Salloch, S., & Savulescu, J. (2025). AI ethics should be mandatory for schoolchildren. AI and Ethics, 5, 87–92. https://doi.org/10.1007/s43681-024-00462-1
- Dastin, J. (2018, October 10). Amazon scraps secret AI recruiting tool that showed bias against women. Reuters. https://www.reuters.com/article/idUSKCN1MK0AG
- Denes, G. (2023). A case study of using AI for GCSE grade prediction in a selective independent school in England. Computers and Education: Artificial Intelligence, 4, 100129. https://doi.org/10.1016/j.caeai.2023.100129
- Eddebo, J., Lind, A. S., Hultin Rosenberg, J., & Wejryd, J. (2025). Artificial Intelligence, Democracy and Human Dignity. Uppsala Universitet. CRS Rapporter 4.
- Eguchi, A., Okada, H., & Muto, Y. (2021). Contextualising AI education for K-12 students through culturally responsive approaches. KI – Künstliche Intelligenz, 35, 153–161. https://doi.org/10.1007/s13218-021-00737-3
- Erümit, A. K., Cebeci, Ü., Özmen, S., & Kalyoncu, F. (2024). Comparative analysis of the studies of countries on AI teaching. Journal of Computer Education, 3(1), 25–63. https://www.journalofcomputereducation.info/
- Eyal, O., Berkovich, I., & Schwartz, T. (2011). Making the right choices: Ethical judgements among educational leaders. Journal of Educational Administration, 49(4), 396–413. https://doi.org/10.1108/09578231111146470
- Fedele, A., Punzi, C., & Tramacere, S. (2024). The ALTAI checklist as a tool to assess ethical and legal implications for trustworthy AI in education. Computer Law & Security Review, 53, 105986. https://doi.org/10.1016/j.clsr.2024.105986
- Felix, J., & Webb, L. (2024, January 23). Use of Artificial Intelligence in Education Delivery and Assessment. UK Parliament, POSTnote 712. https://doi.org/10.58248/PN712
- Floridi, L., Cowls, J., Beltrametti, M., Chatila, R., Chazerand, P., Dignum, V., Luetge, C., Madelin, R., Pagallo, U., Rossi, F., Schafer, B., Valcke, P., & Vayena, E. (2018). AI4People—An ethical framework for a good AI society: Opportunities, risks, principles, and recommendations. Minds & Machines, 28, 689–707. https://doi.org/10.1007/s11023-018-9482-5
- Gilligan, C. (1982). In a Different Voice: Psychological Theory and Women's Development. Harvard University Press.
- Gottschalk, F., & Weise, C. (2023). Digital equity and inclusion in education: An overview of practice and policy in OECD countries. OECD Education Working Papers, No. 299. Paris: OECD Publishing. https://doi.org/10.1787/7cb15030-en
- Gouseti, A., James, F., Fallin, L., & Burden, K. (2025). The ethics of using AI in K-12 education: A systematic literature review. Technology, Pedagogy and Education, 34(2), 161–182. https://doi.org/10.1080/1475939X.2024.2428601
- Greenleaf, R. K. (1977). Servant Leadership: A Journey into the Nature of Legitimate Power and Greatness. New Jersey, US: Paulist Press.
- Harry, A. (2023). Role of AI in education. Injurity: Interdisciplinary Journal and Humanity, 2(3), 260–268. https://doi.org/10.58631/injurity.v2i3.52
- Herrera-Poyatos, A., Del Ser, J., López de Prado, M., Wang, F. Y., Herrera-Viedma, E., & Herrera, F. (2025). Responsible artificial intelligence systems: A roadmap to society's trust through trustworthy AI, auditability, accountability, and governance. arXiv, abs/2503.04739.
- Holmes, W., Stracke, C. M., Chounta, I.-A., Allen, D., Baten, D., Dimitrova, V., Havinga, B., Norrmen-Smith, J., & Wasson, B. (2023). AI and education: A view through the lens of human rights, democracy and the rule of law. Legal and organizational requirements. In N. Wang, G. Rebolledo-Mendez, V. Dimitrova, N. Matsuda, & O. C. Santos (Eds.), Artificial Intelligence in Education. Posters and Late Breaking Results, Workshops and Tutorials, Industry and Innovation Tracks, Practitioners, Doctoral Consortium and Blue Sky. AIED 2023. Communications in Computer and Information Science. Cham: Springer. https://doi.org/10.1007/978-3-031-36336-8_12
- Holter, A., Rummel, A., & Skadsem, H. (2024, December 23). Building ethical AI usage in K–12 education. eSchool News. https://www.eschoolnews.com/digital-learning/2024/12/23/building-ethical-ai-usage-in-k-12-education/
- Kant, I. (1785). Groundwork for the Metaphysics of Morals (T. E. Hill & A. Zweig, Eds.). New York, NY: Oxford University Press.
- Leithwood, K., & Jantzi, D. (2006). Transformational school leadership for large-scale reform: Effects on students, teachers, and their classroom practices. School Effectiveness and School Improvement, 17(2), 201–227. https://doi.org/10.1080/09243450600565829
- Li, L., Yu, F., & Zhang, E. (2024). A systematic review of learning task design for K-12 AI education. Computers and Education: Artificial Intelligence, 6, 100217. https://doi.org/10.1016/j.caeai.2024.100217
- MacIntyre, A. (1984). After Virtue: A Study in Moral Theory (2nd ed.). Notre Dame, Indiana: University of Notre Dame Press.
- Mansouri, F. (2025). Steering the future: Ethical leadership challenges in digital education. European International Journal of Philological Sciences, 5(1), 1–4. https://eipublication.com/index.php/eijps/article/view/2323/2171
- Micewski, E. R., & Troy, C. (2007). Business ethics deontologically revisited. Journal of Business Ethics, 72(1), 17–25. https://doi.org/10.1007/s10551-006-9152-z
- Michopoulou, K. (2023a). The right of children for human rights education and education for democratic citizenship as a state obligation for sustainable democracy. Coventry Law Journal, 28(1), 15–21. https://publications.coventry.ac.uk/index.php/clj/article/view/1019/1009
- Michopoulou, K. (2023b). Child labour as a form of labour exploitation: The role of states and the global business community in pursuing a sustainable future. Coventry Law Journal, 28(2), 19–25.
- Michopoulou, K. (2025). The principle of equality in education: Exploring the legal aspects of the right to inclusive and equitable quality education for achieving the development of a democratic citizenship. European Journal of Education and Pedagogy, 6(2), 64–69. https://doi.org/10.24018/ejedu.2025.6.2.940
- Mill, J. S. (1863). Utilitarianism. London: Parker, Son, and Bourn.
- Ministry of Education Singapore (n.d.). AI in education ethics framework. Retrieved June 9, 2025, from https://www.learning.moe.edu.sg/aiin-sls/responsible-ai/ai-in-education-ethics-framework/
- Mintz, J., Holmes, W., Liu, L., & Perez-Ortiz, M. (2023). Artificial intelligence and K-12 education: Possibilities, pedagogies and risks. Computers in the Schools, 40(4), 325–333. https://doi.org/10.1080/07380569.2023.2279870
- OECD (2025). Empowering Learners for the Age of AI: An AI Literacy Framework for Primary and Secondary Education (review draft). Paris: OECD. https://ailiteracyframework.org
- Ottenbreit-Leftwich, A., Glazewski, K., Jeon, M., Jantaraweragul, K., Hmelo-Silver, C. E., Scribner, A., Lee, S., Mott, B., & Lester, J. (2023). Lessons learned for AI education with elementary students. International Journal of Artificial Intelligence in Education, 33, 267–289. https://doi.org/10.1007/s40593-022-00304-3
- Paul, R. (2024). The ethical limits of AI in recruitment. European Journal of Data Protection Law, 9(1), 17–33.
- Polat, M., Karataş, I. H., & Varol, N. (2025). Ethical AI in educational leadership: Literature review and bibliometric analysis. Leadership and Policy in Schools, 24(1), 46–76. https://doi.org/10.1080/15700763.2024.2412204
- Sampson, D., Kampylis, P., Moreno-León, J., & Bocconi, S. (2025). Towards high-quality informatics K-12 education. Smart Learning Environments, 12(14), 1–30. https://doi.org/10.1186/s40561-025-00366-5
- Shapiro, J. P., & Stefkovich, J. A. (2001). Ethical Leadership and Decision Making in Education: Applying Theoretical Perspectives to Complex Dilemmas. New Jersey, US: Lawrence Erlbaum Associates.
- Solomon, R. C. (2003). Victims of circumstances? A defense of virtue ethics in business. Business Ethics Quarterly, 13(1), 43–62. https://doi.org/10.5840/beq20031314
- Tan, C. (2024). An analysis of attainment grouping policy in Singapore. British Educational Research Journal, 51, 394–415. https://doi.org/10.1002/berj.4080
- Tedre, M., Toivonen, T., Kahila, J., Vartiainen, H., Valtonen, T., Jormanainen, I., & Pears, A. (2021). Teaching machine learning in K-12 classroom: Pedagogical and technological trajectories for artificial intelligence education. IEEE Access, 9, 110558–110572. https://doi.org/10.1109/ACCESS.2021.3097962
- Wen, Y., Chiu, M., Guo, X., & Wang, Z. (2024). AI-powered vocabulary learning. British Journal of Educational Technology, 56(2), 734–754. https://doi.org/10.1111/bjet.13537
- Zidouemba, M. T. (2025). Governance and artificial intelligence: The use of artificial intelligence in democracy and its impacts on the rights to participation. Discover Artificial Intelligence, 5(12), 1–11. https://doi.org/10.1007/s44163-025-00229-5