THE AIED ‘CONUNDRUM’ IN EMERGING ASIA

In Conversation with Prof Maria Mercedes T. Rodrigo
Ateneo de Manila University

By Dr David Lim

Maria Mercedes T. Rodrigo is a professor at the Department of Information Systems and Computer Science, Ateneo de Manila University. She is the head of the Ateneo Laboratory for the Learning Sciences and co-head of the Ateneo Virtual, Augmented, and Mixed Reality Laboratory. Her research focuses on artificial intelligence in education, learning analytics, and educational games. Her current projects include virtual reality games for informal education and the use of large language models (LLMs) for medical education. She is on the Executive Committee of the International Artificial Intelligence in Education Society (IAIED). In 2021, Prof Rodrigo received the Distinguished Researcher Award from the Asia-Pacific Society for Computers in Education (APSCE), of which she is currently President.

Coming into ODDE

Dr David Lim [DL]: Thank you for agreeing to engage in this conversation with us, Prof Maria Mercedes T. Rodrigo. You would be interested to know that, prior to your formal appearance here in this column, our readers would have encountered your name twice in past issues, courtesy of our guest columnist, Prof Junhong Xiao, who cited your work in his commentaries on educational technology (EdTech), artificial intelligence (AI), and artificial intelligence in education (AIED) in Asia. Specifically, it was the case you made in your article “Is the AIED Conundrum a First-World Problem?” (2024) that first caught our attention. Your take on AIED is one of the few from emerging Asia we have come across that, firstly, breaks away from the standard rhetoric of merely extolling the salvific potential of AI; and that, secondly, demonstrates a keen awareness of AI’s varying impact on different groups of people in various educational contexts.

Before getting into that, though, please share with us how you first came to pursue computer science and develop a specific interest in AI and AIED. What is it about these areas that sparks your interest? Has gender ever been a factor – a hindrance or an aid – for you in the field of computer science which has traditionally been dominated by men?

Prof Maria Mercedes T. Rodrigo [MR]: When I was younger, my interests revolved around two main areas – computer science and education. I did my PhD on Technology in Education and, when I was done, I looked for work that married these two interests. I started by working on interactive educational storybooks, and while those were useful, they were technologically simple. I wondered whether there was something with a heavier computer science dimension to it. With some searching and with help from friends and colleagues, I found a community of practice working in AI in Education. I knew immediately that this was what I had been looking for. I attended a conference on Intelligent Tutoring Systems hosted in Taiwan and listened to a workshop on Affective Computing. It was the first time I had seen a decision tree analysis. I attended a workshop on Educational Data Mining, and that was the first time I heard of a bottom-out hint or gaming the system. I thought at that point that this was the direction I wanted my intellectual life to go.

On gender, I’m happy to say that it wasn’t an impediment for me. Computer Science is a male-dominated field, but I never felt like my peers disrespected me in any way. I felt seen as an equal, and I felt respected. That said, I did have to take some time off when I became a mother. I decided to pursue my PhD at around that time, so I was still able to push my career forward during that period.

I found a community of practice working in AI in Education. I knew immediately that this was what I had been looking for.


The AIED Conundrum

DL: Jumping right into “Is the AIED Conundrum a First-World Problem?”, which was published earlier this year in the International Journal of Artificial Intelligence in Education, you begin the article by acknowledging that the Philippines has a “national educational crisis” and that the crisis has existed for decades. As in many emerging countries, the Philippines has a severely under-resourced education ecosystem. Schools lack teachers and infrastructure, and students perform poorly in standardized international tests. You argue that students from such environments would theoretically stand to benefit the most from AI-powered education, but the reality is that they have no access to it. Even working computers are scarce, as is reliable, affordable high-speed internet access.

Moreover, you write, in the Philippines, curricula from basic to higher education are “highly centralized and regulated by the government” and “generally do not incorporate the use of AIED in classroom practice.”

The problems that we found 30 years ago are still the same problems that we find today – we lack equipment, we lack connectivity, we lack teacher training, and so on.


Having laid the groundwork, you put forward your key contention, which I quote in full:

Under these circumstances, how can the Philippines participate in the AIED community? The conundrum that is posed about how AIED “perpetuates poor pedagogic practices, datafication, and […] classroom surveillance” is very much a first-world problem. It is not one that we have at this time because we do not have the infrastructure to deploy AI-based educational applications at scale. I would like to think that, eventually, AI will become a pervasive reality in our educational system. It therefore seems appropriate that, at this nascent stage, we should participate in the AIED conversation in order to open up “new avenues of research and innovation that address pedagogy, cognition, human rights, and social justice.”


To put things in perspective for our readers, the foregoing quote is part of a longer piece that serves as a response to the framing of a panel discussion that took place at the AIED 2022 Conference held in Durham, UK. Post-conference, the panel members, including yourself, were invited to contribute an opinion piece each on the issues raised in the panel discussion for a special issue published in March 2024. As Wayne Holmes explains in “AIED: Coming of Age?” (2024), the preface to the special issue, the framing of the conference, which he drafted, was deliberately challenging in order to provoke thought and debate. This, too, deserves to be quoted in full:

The AIED community has researched the application of AI in educational settings for more than forty years. Today, many AIED successes have been commercialised – such that, around the world, there are as many as thirty multi-million-dollar-funded AIED corporations, and a market expected to be worth $6 billion within two years. At the same time, AIED has been criticised for perpetuating poor pedagogic practices, datafication, and introducing classroom surveillance. The commercialisation and critique of AIED presents the AIED academic community with a conundrum. Does it carry on regardless, continue its traditional focus, researching AI applications to support students, in ever more fine detail? Or does it seek a new role? Should the AIED community reposition itself, building on past successes but opening new avenues of research and innovation that address pedagogy, cognition, human rights, and social justice? [original italicisation removed]


Reading your contention as part of a larger piece written in response to the panel provocation quoted above, it seems to me that what you mean to convey is somewhat different from what the extract, read on its own, would suggest. You are, in effect, arguing not that “poor pedagogic practices, datafication, and […] classroom surveillance” are problems of the first world that are irrelevant and can therefore be ignored by the Philippines and other similarly resource-constrained emerging countries.

In my reading, you are suggesting, rather, that although the Philippines and similar emerging countries have yet to experience the aforementioned AIED problems first-hand, since they have yet to deploy AIED at scale, they should nonetheless seek to obtain a better grasp of these issues (alongside but not at the expense of other AIED issues) and, in doing so, contribute to the development of AIED, if not technologically, then discursively. This is so that they can claim “a seat at the table” of the AIED global community, as you put it, and be better equipped to anticipate the benefits and pre-empt the potential harms of their own future AIED initiatives.

Was this what you meant to convey when you wrote the commentary? If not, would you say that there is merit in the trajectory of thought I have just outlined? Also, based on your decades-long experience and contribution to the field, would you say that, in emerging Asia, if not also beyond, there is a lack of critical reflection on the ethical implications as well as the risks of AIED? I ask this also in light of the systematic review by Zawacki-Richter et al. (2019), which came to the foregoing conclusion, albeit one based on selected AIED research publications in the context of higher education, without geographical identification.

MR: Yes, you’re exactly right. We don’t have the privilege of having AI (or even technology) widely infused in our classrooms. I’ve been working in the area of technology in education for nearly 30 years, and I have collaborated with or read the work of other researchers from the Philippines who have done survey after survey of the state of technology in classrooms. The problems that we found 30 years ago are still the same problems that we find today – we lack equipment, we lack connectivity, we lack teacher training, and so on. Victoria Tinio said this in a paper she wrote in 2003 titled “ICT in Education”. I said this in a paper I published in 2005 titled “Quantifying the Divide: A Comparison of ICT Usage of Schools in Metro Manila and IEA-Surveyed Countries”. Fast forward 20 years: my review, “Impediments to AIS Adoption in the Philippines” (2021), found the same problems.

It’s therefore sometimes hard to start the conversation about using AI without sounding detached from the realities on the ground. Each school year begins with newspaper headlines about resource shortages. Schools need classrooms, books, chairs, chalk, textbooks, electricity, water. To bring up AI and the problems of datafication, classroom surveillance, and so on, is not relatable.

If we don’t engage in the conversation about AI and cultural difference, how will these characteristics be factored into educational software design?


Some people believe that AI is different from prior technologies, that it is poised to be a real game changer, that it will help leapfrog our progress. Choose your metaphor. Maybe it will, but it won’t do it on its own. We will still have to address the more basic problems of infrastructure, connectivity, teacher training, curriculum, etc.

This is not to say that we should shy away from the conversation about AI. There are several factors going in AI’s favor. First, there is already wide penetration of mobile technology in the Philippines. Second, it is possible to get Internet “on the cheap”: there are plans where you can get data access for a few pesos per day. These present opportunities. If we can innovate low-bandwidth, AI-based solutions that run on mobile, we can deliver meaningful, high-quality educational content to our learners.

We do need more critical reflection about ethical use. Many people I’ve spoken to are preoccupied with concerns about academic integrity. Academic integrity is a challenge, of course, but we shouldn’t get stuck there. To me the challenge is that of innovating authentic assessments.

Other people have brought up issues of data privacy, of the protection of the data of our students. We also need to think about the relationship between our students’ international test results and their capacity to understand informed consent. Do they and their parents actually understand that we are collecting all this data?

Opportunities for New Cross-Cultural Research

DL: In the aforementioned article, you argue, as well, that, although the AIED research agenda is predominantly driven by scholars from what Joseph Henrich termed WEIRD (Western, Educated, Industrialized, Rich, and Democratic) countries, there are still opportunities for scholars from the emerging world with “less advanced research traditions” to offer something new.

As an example, you mention in the article how you have conducted cross-cultural research that sought to replicate experiments previously done in developed countries in order to “contribute new insights regarding the ways students from different cultures were similar or varied” in their response to certain phenomena, such as educational technology innovations.

It is precisely this recontextualizing focus of your research that resonates with Prof Xiao who, in his commentary in Issue 21 of inspired, urges Asian researchers in online, distance, and digital education (ODDE) to take a leaf from your book by adopting a context-sensitive approach to the research and praxis of innovations imported from outside local contexts. Coincidentally, the same topic is also examined by Prof Insung Jung in her article, “A Contextualization-Generalization-Recontextualization Cycle in Open and Distance Education Theory Building and Application” (2020), the English translation of which was published in Issue 22.

I think what’s particularly challenging about AI research is that we don’t actually know what its downstream effects will be.


In EdTech, AI, and AIED, how pervasive is the problem of innovations – be they technologies, theories, or frameworks – originating from external contexts being imported wholesale into local thought, research, and application, and taken as having universal validity? How likely is such adoption to lead to the distortion of local realities over time, and to misguided practices and policies, and for these to persist even after the assumed universality of these imported innovations has been contested or debunked in scholarly circles?

MR: I don’t want to sound overly harsh or critical of WEIRD countries. They have pioneered foundational research that drives much of our thought and innovation today. I think you do understand my point, though, that cultures vary, sometimes in ways that affect the impact of an educational approach. Students from different countries are different. The power distance between students and teachers is different. Respect for authority is different. Levels of independence and interdependence are different. So teaching and learning approaches need to adjust as well.

Let me give a few examples. We did a study on cross-cultural help-seeking about a decade ago. We found that students in Costa Rica tended to be more collaborative than students from the Philippines and the United States. We therefore recommended that computer-based technologies deployed in Costa Rica support collaboration. Another study we did around the same time found that students in the US tended to be off-task more than students in the Philippines, but students in the Philippines tended to game the system more than students in the US. Gaming the system refers to students’ exploitation of system features to progress through the educational material without actually learning anything. Examples include hint abuse and systematic guessing. This, we suspect, is because Filipino culture is very conscious of appearances. Gaming the system still looks like work, as opposed to off-task behavior, which tends to be overt defiance of the assigned task. If we don’t engage in the conversation about AI and cultural difference, how will these characteristics be factored into educational software design?

Power Asymmetry and Ethics in Research

DL: Just as noteworthy as your advocacy for recontextualizing research is the fact that you address the issue of asymmetrical power relations in research at all. Again, in “Is the AIED Conundrum a First-World Problem?”, you raise what is an open secret to many but broached by few.

You cite numerous instances of such power imbalances, including researchers in emerging countries having to contend with peers from the developed world who conduct research in their host countries with outsized research funding and racial capital that grant them automatic privilege, credibility, and goodwill. There is also the scenario where researchers in emerging countries must navigate internal power relations that, in contrast to the previous example, work in their favour but nonetheless throw up a whole series of ethical questions. For instance, they may have to weigh the ethics of excluding from their studies, for practical reasons, schools that lack computers or are geographically remote. By virtue of their relatively privileged positions, these local researchers may be able to easily obtain signed parental consent and student assent, but what are the ethical implications if both parents and students have low literacy? Additionally, what is the value of the research they publish if “it has no immediate benefits to teachers and students, other than perhaps a fancy show-and-tell”?

As you made clear, AIED researchers in emerging countries such as the Philippines face a multitude of ethical concerns, many of which are more immediate, basic, and pressing than seemingly more distant issues such as the commercialization of AIED and critiques that AIED has led to poor pedagogic practices, datafication, and classroom surveillance. Based on your experience, how ethically conscientious about power does the typical AIED researcher from emerging Asia tend to be, can afford to be, and should be, if AIED were ever to be deployed at scale in their respective countries?

I ask this because there appears to be this silent but widespread belief among some AIED stakeholders, including in emerging Asia, that, if AIED is ever to take off, one must simply push ahead and introduce it by trial and error, without being distracted or held back by ethical critiques of problems that are deemed either fundamentally technical and fixable over time, or dreamt up by worrywarts and closet luddites who lack a basic understanding of the technologies they critique. How would you respond to this?

MR: First, I think that AIED researchers are trying to do good things. I have been part of the Asia-Pacific Society for Computers in Education since 1997 and the International Artificial Intelligence in Education Society since 2011. They are trying to contribute to the mental and emotional well-being of future generations. I have no doubt about that.

In WEIRD countries, ethical standards for conducting research have been around for decades and are followed quite strictly. In the Philippines, the explicit practice of reviewing research designs for respect, beneficence and non-maleficence, justice, informed consent, confidentiality, and so on only started in my university probably 10 years ago. Other universities in the country are establishing their own equivalents of research ethics offices or institutional review boards, but the process is slow, painstaking, and resource-intensive, so I understand if people react negatively to their imposition.

However, as someone who tries very hard to comply with ethical standards and guidelines, I do believe that ethical research is good research. If we don’t follow ethical standards, can we actually claim that what we are doing is good science?

Going beyond the ethics of experimentation, I hear over and over again about problems with attribution and authorship. Some mentors and many students don’t understand when authorship is warranted and when it is not. Some mentors and students don’t understand how to measure substantial creative contribution. This opens up the system to abuse. Mentors claim authorship of student work without having contributed to the work meaningfully. Authorship becomes political – the mentor with the power can force the issue, and this kind of forced authorship is a form of intellectual theft. In other, more positive cases, some students look upon their mentors with utmost devotion and adoration, so even if the mentor did not contribute much to the work, the students are so grateful that the moral support alone is enough to warrant authorship. Is this Filipino culture again at work?

Ultimately, it is important for researchers to be aware of ethical standards of research and publication and to try to follow them to the extent possible. If not, we’re not doing good science.

Without the right social and political circumstances, technology interventions will fail.


I think what’s particularly challenging about AI research is that we don’t actually know what its downstream effects will be. This is true of many fields of innovation but it seems to be particularly true of AI, for at least two reasons. First, everything is connected. Remember the CrowdStrike update that took down major systems all over the world? Second, there is so much money behind AI. Billions and billions of dollars fund AI innovation. The financial incentives are so mindbogglingly huge that the worries and fears of the rest of humanity are easily overwhelmed. The challenge, therefore, is to estimate the potential harms that may come from an ill-conceived AI application, and to plan for these risks, to the extent that this is possible, in the face of these countervailing forces.

The Human Dimension of AI

DL: It is understandable that AIED stakeholders in emerging Asia are eager to deploy AIED in schools and universities as swiftly as possible, constrained resources notwithstanding, and to see everyone, especially students, motivated to pursue AI as an interest, if not as a career, benefiting from AIED, and becoming “robot-proof” in the future of work, to pluck a phrase from Joseph Aoun (2017).

[I]t can sometimes feel like you aren’t having enough of an impact on your area of research. When you start to feel this way, it’s important to remind yourself where in the spectrum you are and what you can reasonably accomplish in that spot. Stay hopeful, keep working.


Enthusiastic narratives from tech companies, researchers, and the media have fuelled a sense of urgency around AI adoption. Indeed, especially in the emerging world, AI has come to be imbued with salvific potential. Whenever Asia is framed as having significant scope for technological leapfrogging that will allow it to catch up with the developed world, AI is almost always mentioned in the same breath. It is only to be expected, then, that emerging countries, too, will sooner or later begin building and implementing AI curricula in schools and universities, just as developed countries have started doing.

Even here, in the formulation of AI curricula, we are likely to encounter the same “conundrum” we have been discussing. As Wayne Holmes notes in “AIED: Coming of Age?”:

such curricula typically focus on how AI works and how to create it (the technological dimension of AI) and rarely spend much time on its impact on, or the social justice implications for, humans and wider society (the human dimension of AI), which includes ethical questions centred on power and political motivations. Yes, frequently there is a nod to the ethics of AI (usually instantiated as biases), but often this is almost as an afterthought, once the ‘sexier’ topics (e.g., machine learning and large language models) have been studied.


In your view, are the Philippines, and Southeast Asia more broadly, paying sufficient attention to the “human dimension of AI” as opposed to the “technological dimension”? Should we be paying more or at least equal attention to the former? At any rate, to what extent have we managed to develop the technological side of AI, as opposed to importing technology wholesale from other countries?

MR: I can’t really speak for other Southeast Asian countries, so let me limit my response to the Philippines. The problem we see is that there is very little orchestration of how technologies are chosen, designed, and deployed. Choices become highly politicized. As far back as the mid-1990s, for example, the Philippine Department of Education embarked on a modernization program that was supposed to equip schools with computers. The Department had to choose between PCs and Apples. They ultimately went with half-and-half. This made software acquisitions, teacher training, maintenance, and so on, much more complex. Now, the problems are similar. Vendors push the technologies they are selling, with no higher-level strategy to guide choices and limited anticipation of downstream implications.

In Tinio’s (2003) article that I cited earlier, she asks whether ICT-enhanced educational projects are sustainable. She then identifies the dimensions of sustainability – economic, social, political, and technological. Economic sustainability is the ability of schools to finance the program over the long term and refers to the total cost of ownership, from initial acquisition to replacement. Social sustainability refers to community involvement – the buy-in and support of parents, business leaders, and other stakeholders. Any introduction of technology is disruptive and will therefore face resistance. Political sustainability refers to the will to meet that resistance and manage the change. Finally, technological sustainability refers to choosing technologies that will be effective in the long term, in the face of obsolescence. Notice that two of the four factors are people-dependent. Without the right social and political circumstances, technology interventions will fail, so substantial effort has to be invested in working with the social and political environment.

A word about technological sustainability. This is becoming a greater and greater challenge. There used to be a time when you could create an educational program that would run on several generations of computers and their operating systems. This is no longer the case, especially for mobile technologies. Pricing strategies for development platforms change. Technological alliances dissolve. Features that were supported in one version of a platform are deprecated in the next. Application programming interfaces (APIs) keep changing. If you don’t have the funding and the staffing to keep your apps updated, your apps are going to be delisted from the various distribution platforms.

Conclusion: Advice for Scholars

DL: Lastly, what advice would you give scholars from emerging Asia from across disciplines who are interested in gaining a better, more critical understanding of AI not just as a form of computing but also as “a form of knowledge production, a paradigm for social organization and a political project”, to cite Dan McQuillan (2022)?

MR: I think that young researchers have to reflect repeatedly on where in the spectrum of research and development their work lies. If you are on the blue-skies side of the spectrum, the contributions you stand to make will be theoretical, perhaps even foundational. If you are on the more applied side, your contributions will have more direct effects on users and consumers. Many of us are in the middle, and that can be frustrating because it can sometimes feel like you aren’t having enough of an impact on your area of research. When you start to feel this way, it’s important to remind yourself where in the spectrum you are and what you can reasonably accomplish in that spot. Stay hopeful, keep working.

DL: Thank you very much, Prof Rodrigo. It has been a pleasure and an honour.

MR: Thank you for the invitation. It was a pleasure for me as well!