Edspresso episode 19: Erica Southgate

Image: Associate Professor Erica Southgate

In this episode, we speak to Associate Professor Erica Southgate about the importance of teacher involvement in the design and implementation of AI in schools, and some potential benefits and risks of emerging technologies in classrooms.

Erica is a teacher-educator, researcher and Associate Professor of Emerging Technologies for Education at the University of Newcastle. Erica is the lead author of the recent Australian Government commissioned report: ‘Artificial Intelligence and Emerging Technologies in Schools’ (2019) and a maker of award-winning computer games for literacy learning. To find out more about Erica’s research please visit www.ericasouthgateonline.wordpress.com. If you are a teacher, school leader or policymaker who would like to be interviewed for Erica’s research into the ethics of AI in education, Erica invites you to connect with her via email (Erica.Southgate@newcastle.edu.au).

The views expressed in Edspressos are those of the interviewees and do not necessarily represent the views of the NSW Department of Education.


Advancing learning through AI: insights from a NSW teacher-educator and emerging technology researcher, Erica Southgate

SPEAKER:  

Welcome to the New South Wales Department of Education's Edspresso Series. These short podcasts are part of the work of the Education for a Changing World initiative, and explore the types of skills and knowledge that students might need in a rapidly changing world. Join us as we speak to a range of experts about how emerging technologies, such as artificial intelligence (or ‘AI’), are likely to change the world around us, and what this might mean for education. 

This episode features a recent conversation between Bronwyn Ledgard in Schools Policy, and Erica Southgate, Associate Professor of Emerging Technologies for Education at the University of Newcastle. This interview was conducted virtually due to COVID-19.  

In this episode, we talk to Erica about some of the ways in which emerging technologies such as AI might come to be used in our classrooms, and some of the ways in which they are already being used. We also talk about some of the questions we should ask about emerging technologies to ensure they are used ethically and appropriately to support teaching and learning in our schools. 

 
BRONWYN LEDGARD: 

Hello, I’m Bronwyn Ledgard, Manager of Strategic Analysis in Schools Policy at the NSW Department of Education. I’d like to begin by acknowledging the traditional custodians of the land upon which we are meeting today, albeit virtually — I'm on Gundungurra and Dharug lands in the beautiful Blue Mountains — and pay my respect to their elders, past and present. Today, I’m speaking with Erica Southgate, Associate Professor of Emerging Technologies for Education at the University of Newcastle, here in New South Wales. Erica – you’re a teacher educator and researcher who’s thought a lot about how we might use technology to assist with education. Could you tell us a little bit about yourself, and how you became interested in this area of research? 

ERICA SOUTHGATE:  

Thank you, Bronwyn. I'd like to acknowledge that I'm speaking today on Awabakal land, and pay respects to elders past and present, and to my Aboriginal colleagues and students everywhere.  

I'm an Associate Professor of Emerging Technologies for Education – which is a really cool and wonderful field – at the University of Newcastle. I'm the lead researcher of the VR (Virtual Reality) School Project, which was the first study internationally to embed high-end virtual reality in school classrooms; and that was done in local government school classrooms, actually, at both Callaghan College and Dungog High School. 

I'm really interested in technology ethics for immersive learning – so using virtual, augmented and mixed reality – and what the ethical implications of that are for teachers and students. And I'm interested in artificial intelligence, and particularly the ethics of artificial intelligence, and how we might govern it in schooling systems. I'm also interested in a range of emerging technologies and what they mean for the way we're viewed as humans, and the way we view ourselves and others as humans. I'm interested in the implications of that, for instance, in the emerging field of brain-computer interfaces, and in biometrics – the measurement, harvesting and interpretation of bodily data. There are a whole lot of interesting fields; anything that catches my eye, that might be interesting or efficacious for learning, or that has ethical implications – I'm on it. 

BRONWYN LEDGARD: 

It does sound incredibly interesting and lots of fun. You were recently part of a group of researchers who prepared a report for the Australian Government on the implications of emerging technologies, such as artificial intelligence (or ‘AI’), for education. Of course, the idea of using computer-type technologies to support learning isn’t new. What makes AI-based technology so different to 20th century educational software and computer programs? 

ERICA SOUTHGATE:  

Okay, so it's always good to understand what AI is, because it's a really interesting field. The OECD defines AI as a machine-based system that can, for a given set of human-defined objectives, make predictions, recommendations or decisions influencing real or virtual environments, with varying levels of autonomy. So the machine can learn and think, by itself, to make decisions with varying levels of independence from human oversight. I suppose that's the best way to describe AI.  

It's been around since the 1950s. It's an area which has recently taken off because of a whole lot of technological improvement, including cloud computing. The kind of advances in what we call machine vision technologies, and language processing.  

There's a subfield of AI called machine learning, which is devoted to getting machines to learn by themselves: harvesting data, sometimes labelling that data, and interpreting it to make categorisations and predictions. So it's an interesting field, because it raises a lot of questions about the role of machines in our lives in making decisions for and about us. 
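To make the supervised machine-learning loop described here concrete, below is a minimal, hypothetical sketch: a model is shown labelled examples, learns a pattern from them, and then makes its own prediction (a categorisation) about a new case. The dataset, labels and model choice are invented purely for illustration and are not drawn from Erica's research.

```python
# A toy illustration of supervised machine learning: the model learns from
# labelled examples, then predicts a label for data it has not seen before.
# Requires scikit-learn (pip install scikit-learn). All data is invented.
from sklearn.tree import DecisionTreeClassifier

# Labelled training examples: [hours of study per week, quiz score out of 10]
examples = [[1, 3], [2, 4], [3, 5], [6, 8], [8, 9], [9, 10]]
labels = ["needs support", "needs support", "needs support",
          "on track", "on track", "on track"]

model = DecisionTreeClassifier(max_depth=2)
model.fit(examples, labels)        # the "learning from labelled data" step

# Prediction for a new, unlabelled student profile
print(model.predict([[5, 6]]))     # -> ['on track']
```

Even this toy example shows where the questions raised in the interview come from: the machine's categorisations are only as good as the examples and labels it was trained on.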

And there are certainly other technologies that have influenced human behaviour, there's no doubt about that. Anything from the pencil, or the pen, or the ink nib – that's a technology; books are a technology that's certainly had a huge influence. However, we really need to think about what it means to live in a machine age, when machines can influence what we know, what we can do, and, some would argue, who we can be, or our life opportunities. This makes AI a unique, fascinating and complex technology to deal with. 

BRONWYN LEDGARD: 

I'm always reminded of the fact that in ancient Roman schools, they also had tablets, of the wax kind, but yes, technology has been around for quite some time. So, to what extent is AI in our classrooms today? 

ERICA SOUTHGATE:  

Okay, so there are a couple of ways to think about AI. I always like to think of AI as powering the everyday applications that we use. AI is in noise suppression, for instance, when we use teleconferencing. It powers search engines. It does this very, very well, at a scale and speed that humans just couldn't keep up with. It's much better than humans at capturing, organising and presenting information at speed.  

For instance, I opened up my PowerPoint presentation the other day for a lecture and I was given design ideas. The machine has read the document, interpreted the visual layout of the document, and perhaps the words as well (I'm unsure what the algorithm’s actually picking up) and it’s presenting me with a beautiful way to present my slides. All the text was presented within a magically beautiful piece of graphic design. And then, I became so interested that I kept on looking at the different design ideas, and didn't finish the PowerPoint presentation! 

AI is in lots of computing applications, either user-facing – for instance, chatbots that talk to us – or at the back end of operations, making the application run. If we want to find out where something is, a chatbot might pop up; it might vocalise, or it might just be text on a screen. It will tell us where we can find information and make recommendations, and we're all very familiar with this in terms of online advertising. Recommender AI is everywhere. It harvests our data in real time, and it recommends stuff to us. 

Then there are adaptive systems. These are the sort of systems computer scientists dream of building – intelligent tutoring systems, for instance, which adapt and personalise learning. So, we do our learning online and the machine knows when we don't understand something; it'll question us; it'll adapt the curriculum or the pedagogical approach so that we learn better. These systems have been built; some of them work reasonably well within certain domains of knowledge, and some of them don't. They are definitely out there and being developed right now. 
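As a very rough picture of the "adapt when the learner struggles" idea behind such systems, here is a hypothetical rule-based sketch: it simply steps the difficulty of the next item down after a wrong answer and up after a correct one. The item bank and logic are invented; real intelligent tutoring systems model the learner far more richly than this.

```python
# A deliberately simple sketch of adaptive sequencing: step difficulty up after
# a correct answer and down after an incorrect one. The item bank is invented
# and this is illustrative only, not how any particular product works.

ITEMS = {
    1: "Add two one-digit numbers",
    2: "Add two two-digit numbers",
    3: "Multiply two one-digit numbers",
    4: "Multiply a two-digit number by a one-digit number",
    5: "Solve a simple linear equation",
}

def next_item(current_level: int, answered_correctly: bool) -> int:
    """Move up one level after a correct answer, down one after an incorrect one."""
    if answered_correctly:
        return min(current_level + 1, max(ITEMS))
    return max(current_level - 1, min(ITEMS))

# A short, hypothetical session starting at level 3.
level = 3
for correct in [True, False, False, True]:
    level = next_item(level, correct)
    print(level, ITEMS[level])
```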

And in terms of recommender and adaptive systems, there's also a thing called pedagogical agents. There's this view that we'll have helpers, in particular applications or systems, that will assist us in learning in a very personalised way. Some people have even suggested that these little pedagogical helpers, or pedagogical agents, will follow us through our lifelong learning journey and keep a record of what we need to do, what we need to know, where we did well, and where we didn't do so well. I'm not quite sure I'd really like that. There's a bit of an implication when you've got a little pedagogical agent following you around keeping information about you – I mean, it could be useful. 

AI also powers the back end of systems. It powers the data and analytics; it powers data mining, interpretation, categorisation and clustering of data, and the modelling that goes into these data. AI actually sits at the back end of the system as well as the user-facing side. All student data in the learning management system is gathered, analysed and turned into data analytics. It's visualised in terms of a dashboard. You can see, as an educator or as a teacher, who might be a little bit behind in the online work and who might be a bit ahead, in terms of listening to or viewing information. The data captures things like who hasn't logged in for a while and who might be at risk of attrition or failure. We've had these types of analytic dashboards available to us in higher education, and they're coming into learning management systems in school education. I think that we'll get much more integration of AI when it's put into learning management systems.  
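As a concrete picture of the kind of rule that could sit behind an "at risk" flag on such a dashboard, here is a minimal, hypothetical sketch that flags students who haven't logged in recently or who are well behind on module completion. The field names and thresholds are invented; real learning-analytics systems generally use statistical or machine-learned models rather than fixed cut-offs like these.

```python
# A hypothetical, rule-based version of an "at risk" flag on a learning
# analytics dashboard. Field names and thresholds are invented for
# illustration; real systems usually rely on richer statistical models.

students = [
    {"name": "Student A", "days_since_login": 1,  "modules_completed": 9},
    {"name": "Student B", "days_since_login": 12, "modules_completed": 4},
    {"name": "Student C", "days_since_login": 3,  "modules_completed": 2},
]

MODULES_IN_COURSE = 10

def at_risk(record, max_days_absent=7, min_completion=0.5):
    """Flag a student who hasn't logged in lately or is well behind."""
    behind = record["modules_completed"] / MODULES_IN_COURSE < min_completion
    absent = record["days_since_login"] > max_days_absent
    return absent or behind

for record in students:
    status = "at risk" if at_risk(record) else "on track"
    print(f'{record["name"]}: {status}')
```

A flag like this is only a label derived from past behaviour; as the interview goes on to argue, such labels need to be questioned rather than treated as destiny.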

So, AI is here, it's on the horizon, we can see it coming. The speed at which it comes, I can't predict. But I don't think we should wait too long to develop a, kind of, foundational understanding of what AI is and how machine learning works. 

BRONWYN LEDGARD: 

That's very interesting. As a teacher yourself, of adults, are there things you wouldn't want AI to be doing or automating for you? Are there risks in allowing it to do too much, do you think?  

ERICA SOUTHGATE:  

I think that we've got some relatively recent examples of when people didn't think through automated decision-making. They didn't think through the complexity of the technology, and how to understand it and, kind of, interrogate it critically, and they didn't foresee the kind of impact it could have on humans. 

A good example of that is Robodebt. There are now some very good guidelines on automation and the use of automated systems in public service administration, which were produced by the Commonwealth Ombudsman, but which would have been better developed at the start, before the technology was rolled out. We don't really want to be learning from our mistakes only once people have been seriously harmed. 

There's the example of the robo-marking controversy from a few years ago, where there was a stoush around the marking of NAPLAN. There were two quite potent arguments on each side about why robo-marking was useful. But really, what should have happened was a much greater and broader and deeper school and community-based conversation about that. So when we don’t have broader conversations, when we don’t have pedagogical projects around new or emerging technologies, when we have very different evidential perspectives on effectiveness for educational learning, and the ethical implications of it, it becomes problematic. 

I always say that the very foundation of education is the ability to explain stuff. As a teacher and an educator, as a teacher-educator, I need to explain content, I need to explain skills-based learning or procedural learning. I need to explain my pedagogical decision-making, and relate that to evidence. I need to explain assessment and my grading and my whole assessment dynamic around that; I need to explain how that works to students. I need to explain the decisions I make around that, why I might develop curriculum around a particular model versus another model. There's a whole lot of stuff I need to explain — not just content, but the very craft of my teaching.  

But there are systems where we can't explain why a machine made the decision it did – either because of the complexity of the machine learning used (for instance, deep learning or neural nets), or because we haven't got access to the training data used to train the machine – so we can't understand the decisions which have gone into it. Take an intelligent tutoring system: why was one student given a particular curriculum pathway when another wasn't? If we can't explain that as teachers, or we can't explain why certain students are flagged as 'at-risk' or categorised in potentially stigmatising ways, with real impacts on them as humans and on their life opportunities, then this is problematic.  

If the companies or vendors creating this software have proprietary algorithms and won't open them up transparently for independent expert analysis, and there aren't functions built into those systems where the machine can outline or audit its own decision-making process, then, fundamentally, the technology undermines explainability as a foundation of education. 

As teachers, we need to guard against that. If a vendor can't explain, using a robust peer-reviewed evidence base, the effectiveness of a particular application for learning, then we shouldn't buy the product. We shouldn't be experimenting with these types of products on children and young people and teachers because we need to be able to explain and potentially also predict when things may go wrong. 

BRONWYN LEDGARD:  

Do you see potential for these emerging technologies to support students with additional learning needs and disabilities? But also, are there particular risks for vulnerable populations that we need to protect against with these technologies? 

ERICA SOUTHGATE:  

AI can be, you know, quite amazing in terms of what it could do for people who are differently abled. There's some suggestion that it could be used to build quite adaptive systems, or to provide pedagogical agents that really appeal to particular people with different abilities. There's a suggestion that things like computer vision will be really useful for people with visual impairment, and there are already applications like that, in fact. There's a possibility that natural language processing might be developed so that people who speak differently, because they have a disability, can use online tools very effectively, to search verbally rather than through typing. This comes down to the integrity and the diversity of the data sets and the data that's being harvested. There is evidence, for instance, that people who have accents or who speak differently – because they have a disability, because they come from a different socio-cultural background, or even because they're children and have higher voices – aren't recognised or understood by AI systems. This is because they [AI systems] are trained on culturally normative and ableist data sets, and on data sets drawn only from adults.  

So it really comes down to the potential of the technology to offer the kinds of tools that help people with a disability, for instance, or people from different socio-cultural and linguistic backgrounds. But to do that, we really do need to very clearly understand the data process: the data that goes in (the inputs), what's coming out in terms of decision-making – the decisions the machine makes to assist the human – and also what's going on between the input and the output (the algorithmic process). 

There's a lot of literature on AI ethics, people have been writing about this for a long time. There's a suggestion that there are particular domains or fields where humans are involved where there should not be black box AI, that is AI, where we don't understand why the machine or the algorithm is making the decisions that it is, or that it’s proprietary - business or government won't let us look inside the box.  

There's a particular position that no black box AI should be used in sensitive, human domains such as criminal justice systems, welfare systems and education. That's because the humans involved are vulnerable; they're not necessarily powerful enough or knowledgeable enough to challenge the system. Even if you understand this technology, it's very difficult to challenge the system, because you need to understand how algorithms work, and you need very deep statistical and mathematical knowledge to be able to actually explore this. Or you need someone to translate that for you through a transparent auditability trail, and you need somebody able to do that who's an independent expert, who's not part of the manufacturing or vendor industry process. I think we really do need to think about that, particularly for vulnerable populations: who are the independent experts here who can provide oversight? We also need to think about the data that's used, the assumptions in the data, who's training the models and on what data, and what sort of data is being harvested by some AI in real time, to understand how biases can be learned and amplified in machine learning.  

There's a very interesting paper that was published this year, out of the UK, by Wachter and colleagues (Wachter, Mittelstadt & Russell, 2020). It points out that we're entering a world where machines may discriminate in ways that are different to humans. We’re used to understanding discrimination, legally and as humans, in terms of, particular groups of people, so: gender discrimination, discrimination against people with a disability, racial discrimination, for instance. But Wachter and colleagues point out that the machines may discriminate in ways that we don't understand, and that we actually can't discern. So the machine may produce decisions, which are discriminatory in ways that don't relate to the groups we usually associate with discrimination as a machine-learning outcome, and can actually create discriminatory effects that we may not ever be able to discern. And if we can discern them, it may be long after the fact when harm has been done. They [Wachter and colleagues] argue that AI may very well reshape discrimination as humans understand it.  

If we can't understand the ways machines discriminate, if they're discriminating in ways that aren't conventionally understood by law, and if we can't collect data or information to build a prima facie case around discrimination because the algorithm is proprietary or a black box type of AI, then this becomes a problem for humans. And it becomes a particular problem if you're the human being discriminated against, or you're from a group or profile group being discriminated against. Really, they [Wachter and colleagues] raise the question: can we automate fairness, and how would we do that? 
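To give a concrete sense of what even a very narrow, automated fairness check can and cannot do, here is a hypothetical sketch that compares how often a system gives a favourable outcome to two groups, using the common "four-fifths" rule of thumb. The data is invented, and this is a generic demographic-parity style check, not the method Wachter and colleagues propose.

```python
# A generic check for one narrow notion of fairness: compare the rate at which
# an automated system gives a favourable outcome to each group, then apply the
# common "four-fifths" rule of thumb. Data is invented; this is not the method
# of Wachter, Mittelstadt and Russell, and passing it does not make a system fair.
from collections import defaultdict

# (group, received_favourable_outcome) pairs from some hypothetical system.
decisions = [
    ("group_a", True), ("group_a", True), ("group_a", False), ("group_a", True),
    ("group_b", True), ("group_b", False), ("group_b", False), ("group_b", False),
]

counts = defaultdict(lambda: {"favourable": 0, "total": 0})
for group, favourable in decisions:
    counts[group]["total"] += 1
    counts[group]["favourable"] += favourable

rates = {g: c["favourable"] / c["total"] for g, c in counts.items()}
worst, best = min(rates.values()), max(rates.values())
print(rates)                                               # {'group_a': 0.75, 'group_b': 0.25}
print("disparate impact ratio:", round(worst / best, 2))   # 0.33, below 0.8 -> flag for review
```

Even this simple check assumes we already know which group each person belongs to; part of Wachter and colleagues' argument is that machines can create disadvantage along lines that no such predefined grouping captures.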

BRONWYN LEDGARD: 

There are also some risks around inclusivity that you've pointed to in your recent report for the Australian Government. What do we need to do to ensure equity in education, do you think, where emerging technologies are concerned? 

ERICA SOUTHGATE:  

I always say that we need to ask questions of any new technology, particularly a technology where flags have been raised around harm and risk – and not all AI presents the same level of risk. When I'm using my PowerPoint design ideas, the only risk I have is that I'll be diverted off my task of finishing my PowerPoint lecture; that's a risk, that I'll become so interested in how beautiful it is that I won't be able to, you know, finish it. So not all AI presents the same level of risk.  

However, if we are going to use particular systems with students for learning, then we need to ask some questions about those systems. We argued in the report we wrote for the Commonwealth Government, which was published last year, that these technologies need to be very carefully incubated in place, with very strong ethical frameworks around them, and governance, oversight and accountability mechanisms. I mean, I've often heard some education experts say: oh, in the future, all students will have their personal computer tutor, and then the teacher will be freed up to do other things, like really connect with students. That's a very naive and educationally strange point of view, because I think what we need to do is not just assume that somehow the perfect machine program will arrive and that it will differentiate and adapt and respond to human need.  

We need to understand that these types of technologies are built on data – big data flows that are harvested and interpreted by both humans and machines. If we're going to use them, we need to use them carefully with learners, and we need to develop an evidence base that they're effective for learning. Why would we put them in a classroom unless we really knew they were going to have some benefit or advantage that a human teacher couldn't give? And why would we argue that somehow teachers will be freed up to do all this other stuff, when in fact the role of a teacher is to personalise and adapt learning, not to be replaced by machines? So I don't see machines replacing teachers, but I do think that when we introduce this technology, we really need to be very mindful and careful about it.

We need to think about how we manage risk beforehand, think about opening up the algorithm, think about what's happening between input and output, and really just be careful that we're not denying children and young people their human rights. Everyone has a right to privacy; everyone has a right to bodily integrity. For instance, when there are suggestions that we'll use facial recognition technology for roll call in schools, we should ask: is that a good use of it? Is that a good use of harvesting people's bodily information, and the behavioural information associated with their bodies, for a task which teachers do in the morning quite efficiently – and which they use to connect with and learn about their students and what's going on? Should that task be replaced by a machine? So there may be lots of good uses of the technology, but we really just need to carefully incubate it in place and be cognisant of not actually creating human rights issues in schools around it. 

We need to live in a world where data isn't our destiny. We used to say that the family you are born into, the community you're born into, shouldn't be your destiny in this country. You know, everyone should have a fair go. So when we use data – and remember, all of this data is, to a degree, data from the past, not the present; it's already happened, it's already been generated – this data shouldn't be used to determine what opportunities we have. It shouldn't be used to categorise us in ways which can stigmatise. When we're presented with an analytic dashboard which labels a learner, we should really question that; we should understand what that label means and maybe the unintended effects of it. So data shouldn't be our destiny.  

I was actually talking to a computer scientist, someone who does a lot of work around cybersecurity, and this person spoke about this new technology, and particularly the use of biometrics and the very subtle introduction of biometric harvesting – that is, the harvesting of our face and our voice and the tracking of our movements – and said that this really is a new phrenology. Which I thought was really clever, but also very frightening. Phrenology is a debunked science from a hundred or so years ago, where people thought they could tell if you were a criminal by the lumps and bumps on your head, and your face, your skull bone structure, I suppose. And it [phrenology] was used as the basis for eugenics, actually. But she [the computer scientist] said this new kind of biometric harvesting, and the use of AI to make decisions about, for instance, who's engaged based on their gaze pattern or their pupil dilation, is really inappropriate and not scientific at all. And yet this type of technology is being incorporated into online examination applications at the moment. So if students who are doing an online examination look away too much during the examination, it raises a red flag in the system, which people can then use to determine whether they're maybe looking at the answers on their wall, and that can influence whether you're considered someone who's cheated in the exam or not. 

And I'm particularly interested in, and concerned about, biometrics and biofeedback and their introduction in education. I don't want to be viewed as a, kind of, meat sack of data. That's against my right to bodily integrity, a right that's been fought for long and hard by women and people of colour, queer people, and Indigenous people: the right to bodily integrity. And yet we have systems now that see us as these big meat sacks of data to harvest, break down into pixels, break down into micro-movements to process, to try to understand our intent and our emotional state. It really is the new phrenology.  

Let's not fall in the trap of false science. Let's really, as professionals, as policymakers in education, really understand the science behind it and the evidence base. And we really need to understand: what are our human rights in this space, and for our students to understand, what are the digital rights of the child in this space. And we haven't had enough conversation about that in this country. 

BRONWYN LEDGARD: 

What sort of skills do you think teachers will need in that case, if we're going to have more AI in classrooms? 

ERICA SOUTHGATE:  

Well, I think it's not about individual teachers: it's about the profession. I think it's a professional learning issue. So there's a pedagogical project here, where we start to learn about what AI is; we begin to demystify it, so it doesn't feel like magic but like a particular machine-human process; we have access to independent experts who can provide technical advice on the computer science aspects, but also on the ethical and governance aspects of this type of technology; and the profession begins to become really active in the conversation.  

So to me, there's a pedagogical project: how do we skill teachers up to understand what AI is, what it can do, what it can't do, what the evidence base is and what it isn't? What kinds of questions should we ask in terms of the procurement process – so, when a school system or individual schools procure particular applications or platforms using AI, do they know it's there and what's going on? Do they understand, under the law, for instance, that biometric data is considered sensitive data and has to be handled in particular ways, and that there's a whole regulatory framework around privacy impact assessments? It's a complex field.  

Now, we can't expect individual teachers to master this, or master it overnight, but we can begin a pedagogical process. We're good at pedagogy; we understand teaching; we know that when new things come along, we need to understand them and be able to explain and engage with them. And so the teaching profession is very well placed, actually, to begin to grapple with the technology and its introduction. But we need some leadership in this space, I suppose, and some transparency with government. If we go and look at schooling systems, what do their privacy policies look like? What do their data-sharing arrangements with vendors look like – is that transparent? If I were a parent, could I go and find that out on a website or by contacting someone? What are the processes? What are the dynamics? How do we educate not only teachers, but also students and school communities around this? What structures do we have in place in our schooling systems to educate teachers and students through curriculum, and school communities more broadly, about this? We really do need to deeply engage with all the complexity around it, so that we get good outcomes for the profession, for the students we care about, and for our communities more broadly. So it's a pedagogical project, with a different, more engaged and transparent approach. I'm hoping that we, as a profession, can band together to get that done. 

BRONWYN LEDGARD: 

Thank you, Erica. We have a standard question that we ask people in this series: If you could go back in time and give advice to yourself as a school student, what would you tell yourself to focus on to help you prepare for what was to come, and would you give different advice to students today? 

ERICA SOUTHGATE:  

I grew up in a working-class background; you can probably hear that from my accent. My family was from the country; they'd moved to Western Sydney and, you know, we didn't have a lot of money. I was very fortunate that I went to a government school with great teachers who really opened our eyes to the world – to art, literature, history, geography and science – and that my family invested in books. 

I learned that you need to be curious about the world; there's always something to learn and investigate. There are always questions that are going to be raised. And from books and from interactions with my teachers, I learned that it's through dialogue with different people that we often find unique perspectives. And so, from my background, I really do value different points of view, dialogue, curiosity, and knowledge, and that’s held me in good stead through a very long and often tumultuous life.  

What I'd say to students now is that, curiosity's the thing, and a thirst for knowledge, openness and interest in different perspectives – all of those things will put you in good stead no matter what topic you investigate, no matter where you land in life, or no matter what barrier is thrown up in front of you. I don't know if that's very wise or not, but that is the main thing I've taken away from life.  

BRONWYN LEDGARD: 

I think it sounds pretty wise to me, Erica. It's been lovely to talk to you. Thank you so much for your time today. We really appreciated the opportunity to talk to you and to hear your thoughts and reflections on AI and emerging technologies and education, and how we might use that to help support student learning and growth. 

SPEAKER: 

Thank you for listening to this episode of the Edspresso Series. An extended version of this interview is available in Issue 3 of our Future EDge publication. You can find a link to this paper in this episode’s description. To find out more about the Education for a Changing World initiative, please visit the New South Wales Department of Education's website. There you can sign up to our mailing list or you can join our conversation on Twitter @education2040. 

 

Reference list:  

Berendt, B., Littlejohn, A., & Blakemore, M. (2020). AI in education: Learner choice and fundamental rights. Learning, Media and Technology, 45(3), 312-324. https://doi.org/10.1080/17439884.2020.1786399 

Southgate, E., Blackmore, K., Pieschl, S., Grimes, S., McGuire, J., & Smithers, K. (2018). Artificial intelligence and emerging technologies (virtual, augmented and mixed reality) in schools: A research report. Commissioned by the Australian Government Department of Education. Newcastle: University of Newcastle, Australia. https://docs.education.gov.au/node/53008 

Southgate, E. (2020). Artificial intelligence, machine learning and why educators need to skill up now. Professional Voice Journal, 13(2). https://www.aeuvic.asn.au/professional-voice-1322 

Wachter, S., Mittelstadt, B., & Russell, C. (2020). Why fairness cannot be automated: Bridging the gap between EU non-discrimination law and AI. https://ssrn.com/abstract=3547922 

 

Additional resources

An edited version of this interview is available in Issue 3 of our Future EDge publication.

Erica Southgate has also provided the following infographics on technology for education:

A quick guide to artificial intelligence for younger students

A quick guide to artificial intelligence for older students

The power of virtual reality for education

The power of augmented reality for education

Are you a NSW public school teacher who wants to know more about how to embed technology into your teaching?

The department’s Technology 4 Learning (T4L) team support teachers, students and schools with the best technology and advice to create classrooms of the future. You can read and subscribe to the T4L magazine here, listen to their Virtual Staffroom podcast and keep up to date on technology in education news via the T4L blog.
