Edspresso episode 18: 3A Institute


How can educators help prepare learners to thrive in a world augmented by artificial intelligence? And should education be preparing to disrupt technology design rather than being disrupted by it?
In this episode, now-retired department Deputy Secretary Leslie Loble speaks with Professor Genevieve Bell and Dr Amy McLennan from the 3A Institute (3Ai) at the Australian National University. Together, they discuss how education might help to shape technology design and how the education system can create and support the next generation of designers.
Genevieve Bell is the Director of 3Ai and a cultural anthropologist, technologist and futurist. Amy McLennan is a Research Fellow at 3Ai and brings knowledge of medical anthropology to work on the intersections of technology, society and wellbeing. Leslie Loble led strategy, reform and innovation in Australia’s largest and most diverse education sector for nearly two decades, until her recent retirement as Deputy Secretary.
The views expressed in Edspressos are those of the interviewees and do not necessarily represent the views of the NSW Department of Education.
SPEAKER: Welcome to the New South Wales Department of Education's Edspresso Series. These short podcasts are part of the work of the Education for a Changing World initiative and explore the thinking and ethical literacy skills students need in an AI future. Join us as we speak to a range of experts about how emerging technologies, such as artificial intelligence, are likely to change the world around us and what this might mean for education.
In what ways will artificial intelligence (AI) transform our ways of living, working and learning? Can thinking ethically and critically help our students shape their relationship with technology? This special episode features a conversation between the now-retired Deputy Secretary Leslie Loble and Professor Genevieve Bell and Dr Amy McLennan from the Autonomy, Agency and Assurance Institute, also known as the 3A Institute (3Ai), at the Australian National University in Canberra. This interview was conducted virtually due to COVID restrictions. Together, they discussed the importance of thinking critically and ethically about our relationships with technology, and how education and policy can help to support this.
LESLIE LOBLE:
I'm Leslie Loble, Deputy Secretary in the New South Wales Department of Education. Today, we are joined by Professor Genevieve Bell and Dr Amy McLennan of the Australian National University’s 3A Institute. I would also like to start by formally acknowledging that we meet, even if virtually, on the different lands of the traditional owners, and I pay my respects to Aboriginal elders past and present. Our guests are uniquely able to shed light on the incredibly important issues facing us around developments in artificial intelligence, which is now transforming the systems we rely on across every domain of the economy, society and our personal interactions. And they bring a unique combination of deep expertise in those systems and in how humans engage with them. But today in particular, we want to focus on the implications for education, and especially on the ethical dimensions, the ethical reasoning skills that students will need to navigate a very complex world. I might start by asking each of you, beginning with you, Genevieve, to tell us a bit about yourself, and particularly how you ended up working in artificial intelligence, and what keeps you engaged in this field.
GENEVIEVE BELL:
It's always a good question. So I'm a cultural anthropologist by training. I did my PhD at Stanford, I'm also the child of an anthropologist, and I spent my childhood living with Aboriginal people on their country, hearing their stories. And it's a really long way from Ali Curung in Central Australia to Silicon Valley in the United States, where I've spent my last 25 working years. And it's an even greater distance, ironically, from the anthropology department at Stanford University to the corridors of a big tech company. Although it was the shortest physical distance, it was the longest intellectual and professional distance. I ended up at Intel, which was the first place I encountered computational technology and digital technology at scale. And I've spent 20-plus years in that company; I'm still there part time. And my job there has always been to think about how you take rich insights about human practice and use them to shape next-generation technologies. This means over the last 20 years, I've looked at everything from Bluetooth and Wi-Fi to early mobile phones, web applications and application programming interfaces (APIs). But for me, it's less about artificial intelligence (AI) in and of itself, and more about what it represents, both as a technical system and as something that human beings are going to have to encounter repeatedly over time.
LESLIE LOBLE:
Very good. Amy, it's great to have you as well. Likewise, just give us a sense of your background and how you ended up at the 3A Institute (3Ai) and doing what you're doing.
AMY McLENNAN:
It's great to be here today, thank you. My background also starts in the Australian outback, although in a very different part of it. I started my studies in medical science and went on from there to work in policymaking and international development. On the advice of one of my mentors at the time, I looked into anthropology and ended up pivoting to medical anthropology. I went over to the University of Oxford, where I ended up spending 10 years almost unlearning everything I had learned about science and relearning how to think about the world from the perspective of a social scientist. I learned to think from the perspective of a number of different cultures and peoples around the world. Since then I have remained affiliated with the University of Oxford. But I've also spent some time moving back into the policy world and into the consulting world. I returned to Australia four years ago to work with the Department of the Prime Minister and Cabinet on a number of complex, cross-agency policy issues. It was while I was there that a friend of mine called me and said, “look, there's this lady at the Australian National University and she seems to be doing some really interesting thinking about the future of technology”. I was really fortunate to be involved in running one of the lunch workshops of the 3A Institute (3Ai). And I discovered a really exciting group of people who were working not only on emerging technologies, but also thinking about how we build a future we all might want to live in, in a context in which these technologies will be in every aspect of our lives.
LESLIE LOBLE:
Genevieve, Amy sent me your lecture to the Australian Humanities Association, which was a fantastic lecture. You start it by talking about how technology is clearly nothing new, and you tell, as an example, how fish traps were created out at Brewarrina, as well as elsewhere, that have lasted some 40,000 years. Forty thousand years of utility for human beings is pretty impressive for a piece of technology. But it feels as though we might be stepping into something that is different about technology, in particular when machines learn. What questions has that raised for us?
GENEVIEVE BELL:
It's a good question. So, starting back in the 1950s, researchers in the United States proposed an agenda that they called the artificial intelligence agenda. And what they were advocating for, back in the 50s, was this notion that you could break down human tasks into small enough pieces that a machine could be made to enact those tasks. The things they imagined would distinguish artificial intelligence from computing at that moment in time were that the systems would be able to handle abstractions, and that the systems would be able to do things that humans could currently do. For example, the systems would be able to understand speech, and they would learn for themselves. That systems would be able to learn for themselves was considered the complicated prize.
In framing the problem that way, that an object could reason, think symbolically, understand speech and learn, they were attempting to make machines seem like they were going to become more human. The idea that you could program a system, that you could teach it to do the same thing over and over again, without having to make it do the same thing over and over again, is of course old. That's how we taught looms, with punch cards nearly 300 years ago. The notion that a machine could acquire information, make sense of it and replicate itself; that is, in some ways, the thing that feels a little bit different.
So, as you said, I gave a long lecture at the end of last year, my love letter to the lift, where I was really interested in thinking about what would happen if you took something as mundane as a lift, a thing that goes up and down inside buildings, and gave it a capacity to learn and act without having to wait.
At the moment, if you summon a non-AI lift, you press a button, and the lift comes to you. The button press calls the lift; it is effectively a command-and-control infrastructure. In next-generation lifts, the ones that are running AI at their core, the lift has decided long before you press the button where it needs to be in the stack, inside the tall building. Frankly, it is anticipating your actions because it has been tracking the actions of people inside the building for months, possibly years. And it notes that you exit the building at about 12 o'clock, and so do all your friends and colleagues. And so, it has arranged all of the little lift carriages in anticipation of your call long before you press the button. It's that capacity to determine action without a command; that's the thing that feels different. Of course, it also triggers a set of regulatory, policy and principled questions about what it would mean to have machinery appear to make decisions that historically required human oversight. Will those decisions be right? How often will they be reviewed? What are the consequences of those decisions? Do you need to have a human somewhere in that conversation? If so, what would that look like? And somewhere between the dystopian and utopian narratives that frame our thinking about these objects, and the legislation that doesn't yet make sense of them, is the place where we have to spend our time.
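To make the contrast between a command-and-control lift and an anticipatory one concrete, here is a minimal, hypothetical sketch of how a controller might use months of logged button presses to pre-position cars before anyone calls. The function names, the data format and the simple frequency model are illustrative assumptions, not a description of any real lift system from the interview.

```python
# Hypothetical sketch: pre-position lift cars from historical call data.
from collections import Counter

def predict_busy_floors(call_log, hour, top_n=3):
    """call_log: list of (hour_of_day, floor) tuples from past button presses."""
    demand = Counter(floor for h, floor in call_log if h == hour)
    return [floor for floor, _ in demand.most_common(top_n)]

def pre_position(cars, call_log, hour):
    """Send idle cars toward the floors that historically generate calls at this hour."""
    targets = predict_busy_floors(call_log, hour, top_n=len(cars))
    return {car: floor for car, floor in zip(cars, targets)}

# Example: months of logged calls show heavy ground-floor demand around midday.
log = [(12, 0)] * 40 + [(12, 5)] * 25 + [(9, 0)] * 30 + [(12, 8)] * 10
print(pre_position(["car_A", "car_B"], log, hour=12))  # {'car_A': 0, 'car_B': 5}
```

The point of the sketch is only the shift Bell describes: the system acts on anticipated demand rather than waiting for a command.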
LESLIE LOBLE:
Sticking with the lifts for a moment, we're told that part of the reason we need to move to this system is that it's more energy efficient, and that it has a good and positive purpose behind it. I suppose that's a key question, isn't it: how do we shape these new technologies to indeed serve better purposes? What are the sorts of things we need to do to shape them for positive and widespread benefit? And secondly, how do we even know if that's being achieved, if what sits behind it is highly technical and specific knowledge, particularly around algorithms?
GENEVIEVE BELL:
Well, those are two good questions. I think the first caveat would be that we need to be more precise in our language about the difference between machine learning, an algorithm and artificial intelligence; those are different things, and some of them are subsets of the others. If you've used a washing machine in the last 20 years, you've encountered something that had algorithms. As soon as you press the cycle that says 'delicates', someone has made a decision about water temperature, spin, rinsing and agitation, and sequenced them accordingly. So that was an algorithm. They're not new. They've been with us for a long time.
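As a minimal illustration of that point (not drawn from the interview), a 'delicates' cycle can be written down as exactly this kind of pre-decided sequence. The step names and parameter values below are invented for the example.

```python
# A 'delicates' cycle as a fixed, pre-decided sequence of parameterised steps:
# the designer's choices are baked in, and nothing here is learned by the machine.
DELICATES = [
    ("fill",    {"water_temp_c": 30}),
    ("agitate", {"minutes": 4, "intensity": "gentle"}),
    ("rinse",   {"cycles": 2}),
    ("spin",    {"rpm": 600, "minutes": 3}),
]

def run_cycle(steps):
    # A real controller would drive valves and motors; here we just trace the sequence.
    for name, params in steps:
        print(f"{name}: {params}")

run_cycle(DELICATES)
```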
The idea of machine learning, for the most part, is just statistics. My colleagues in computer science don't always like it when I say that, but much of what machine learning is, is the application of long-standing statistical methodologies: linear regressions, Bayesian analysis, notions about finding similarities and patterns are based on older ideas. What's different here, and what's different with algorithms, is the scale and speed at which we can now deploy them, and the tasks to which they're being put.
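A minimal sketch of that point, assuming nothing beyond ordinary least squares: the "learning" below is simply a long-standing statistical estimate of two parameters from data. The data values are invented for illustration.

```python
# Ordinary least-squares linear regression, fitted by hand: the "learning"
# is nothing more than estimating a slope and an intercept from the data.
def fit_line(xs, ys):
    n = len(xs)
    mean_x, mean_y = sum(xs) / n, sum(ys) / n
    slope = (sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
             / sum((x - mean_x) ** 2 for x in xs))
    intercept = mean_y - slope * mean_x
    return slope, intercept

slope, intercept = fit_line([1, 2, 3, 4], [2.1, 3.9, 6.2, 7.8])
print(round(slope, 2), round(intercept, 2))  # roughly 1.94 and 0.15
```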
And then artificial intelligence is the combination of those things, plus a sensor array that will know the world around it and garner more information, plus some ability to drive action. And being clear about our terms is actually, I think, one of the ways we do a better job of working out how to answer your questions. The second piece is thinking about what it means to imagine making some of the objects around us, or new objects, smarter. We need to be really clear about why we're doing it.
So, you're right to ask the question about the lifts. What was the intentionality in starting to use artificial intelligence inside lifts? Well, part of it was to do a better job of rationalizing where the lifts were at any given moment in time, because lifts use considerable energy. Of course, the trade-off there is that, in saving energy, the lift doesn't come quite as quickly as it once used to, because lifts are coming from different places in the building. The question then is: what is the locus for evaluating the machinery? Is it that you have to wait 30 seconds more? Or is it that it saves an enormous amount of energy over the lifetime of the building? Well, it turns out that, as humans, we're not really good at making that trade-off. We're not really good at articulating it, and we’re equally bad at asking the question about what the trade-offs should be. So, thinking through the question ‘how do we evaluate what makes a technical system good?’, for me at least, means moving past some of the rhetoric, and the evaluative mechanisms, of the last 200 years; these were usually about efficiencies and productivities. Now, we are starting to ask other questions about sustainability, energy use, safety and comfort; these all seem like good questions. We’re now in June 2020 – I imagine there are some other questions we might want to ask about systems too: about their fairness, their equitableness, and about the other mechanisms they’re entwined in. What are these systems embedded in? And then there is your second question: even if you can imagine the right metrics or indices by which you would evaluate these new technical systems, who would do that evaluation? That is a second, really good question. And it's hard, because much of what's going on inside these systems is deeply complicated. And so part of what Amy and I, and the team, work on every day is trying to think through what it will take to create a vocabulary and a set of skills that can handle both the questions about intentionality and the questions about complexity.
LESLIE LOBLE:
Amy, what skills do people need? What do students need to know?
AMY McLENNAN:
There's an opportunity here to flip one of the questions we've been asking in the education space for a while. The question has been: how will technology disrupt education? And I think there's an opportunity here to ask how education could disrupt, change or shape the technical systems that we're building. If we think about things in that way, then we start a different conversation. Genevieve has already alluded to some of what we're working on. In our current curriculum, we're thinking about what the building blocks of the technical systems around us are, and how we sharpen our understanding of those and talk about them accurately. How do we learn about those key building blocks? How do we learn to frame a series of questions around the way these systems come together, the dynamics within them, and the way they speak to other sectors or areas beyond the computational? How do you go about working with people from different backgrounds, both valuing what they bring to the table and being able to incorporate it into your thinking, with the understanding that you might not be an expert in that area, and they might be?
A great deal of this comes down to things like respect and curiosity. And on the point about curiosity, I quite often ponder something that Bruce Pascoe wrote not so long ago. He asked us to teach our children to doubt. And he said it isn't the kind of doubt that leads to inaction, but the kind of doubt that leads you to question and be curious about the world and what is going on, and to seek to understand it, rather than just blindly trust whatever is sitting there in front of you. I think there's something to that which is really important. And beyond all of that, certainly something that I have taken away from working in the Institute is thinking about how we imagine a future we might want to be in, and how we cultivate that sense of imagination and possibility.
LESLIE LOBLE:
So you're both making the case that it's important that people, including young people, come to this with a sense not only of curiosity, or, as you put it, doubt, but also a sense of agency. I hear that: it is very legitimate and important for us to control and shape technology, not to think of it as a ‘black box’ that only certain people have the keys to operate. How do we best develop and deploy a sense of agency?
GENEVIEVE BELL:
Well, I tend to think this is a conversation to have not just with our students, but with their parents and their communities too. For me, it is absolutely the case that our educational system, from Kindergarten all the way through, is going to require an orientation to technology. But it also requires an orientation to critical thinking, because it's not about what programming language you teach people; it's about how you frame an introduction to the fact that there is a programming language. I'm of a vintage, as I'm sure some of your teachers and parents are, where I learned to program in high school. I learned Pascal and then Java and C++. I've been exposed to Python, and I don't love it. But I recognize them all as ways of framing information. And I think part of it isn't about the particulars of any introduction to computing; it's about how you frame the broader subject. And for me, that's a conversation that needs to be had across multiple sectors of society, not just one part of it. But the notion of the burden always being on youth is not, for me, actually a helpful answer.
LESLIE LOBLE:
No, a fair point, for sure. Your focus on critical thinking echoes the work we're doing: we have certainly concentrated on what we call thinking skills as the essential components. And while technology pushes the boundaries of what is possible ever further out, the further we go as we explore this, the more we come back to the really core elements of what education always was and always will be, and what offers a strong foundation: particularly those capacities to think deeply about any aspect of the world.
GENEVIEVE BELL:
Well, and for what it's worth, you know, in the conversations that we have at the Institute and the conversations that we have with large technology companies, for the most part, as every single one of those companies thinks about what their next generation of work and workers will exhibit, the thing they most ask for is not actually the quote-unquote technical skills, but the critical thinking skills. It's the one thing they all say they want more of. So, you know, to a person in those conversations, what I hear back from senior leadership teams is: yes, they need to be technical, forget that. What they also say is: we really need people who know how to ask good questions, who know how to work in teams of people who don't share their practices. We want people who are committed not just to collaboration and teamwork, but to the success of others. And we want people who know how to drive a conversation forward and also inhabit ambiguity. And that's an interesting brief when you think about what it means to teach critical thinking or critical questioning. When I look at the arc of what will unfold in terms of the affordances of technology, we know that what we have in 2020 will be completely different from what we will have in 2025 and 2030. So how do we give people a way of making sense of the objects that are in front of them that isn't beholden to the ones they grew up with?
LESLIE LOBLE:
Amy, how do we need to start thinking about public policy, including what skills are needed?
AMY McLENNAN:
One thing we could think about differently in policy-making and in government is, at some moments, to rethink the way we approach a particular policy, regulation, piece of legislation or any of the other instruments of government. We have a tendency to think of them as solutions to problems. And there are some moments where it may be valuable to think of them instead as interventions in a system: for example, interventions in a system like the lifts that Genevieve has talked about, or potentially interventions in other systems that contain all manner of human, environmental and technological components. When we think of these instruments as interventions in systems, it raises a couple of other questions. It prompts us to think about the unintended consequences of what we're doing in our specific space. And it also calls us to think about the future state of the system that we're trying to achieve. So instead of asking what problem we are trying to solve, what future are we trying to build? How do we bring in skill sets that can think about how an instrument might play out at the level of the society or community? How might we think about the different unintended consequences, whether they be economic, environmental, social or health-related? How do we imagine possible futures? Which voices are involved in creating that future, and who might want to be a part of that? And then, as an extension from that, how might we start to build this type of thinking right from school education upwards? So there's certainly space for an education system that encourages solving problems and arriving at a right answer or a wrong answer, and indeed we need that for all sorts of reasons. But alongside that, how do we also encourage moments of thinking about interventions in systems, about complexity and about possible futures? How do we bring that into the fold as well?
LESLIE LOBLE:
Transparency is often also mentioned as a really key component of the guardrails when it comes to technology. What other guardrails do we need when we're talking about this? And particularly, do you think that there is a role for ethical reasoning? And by that, I mean, more than just what is good or evil, but the capacity to think through the ethical dimensions of the challenges that we're facing.
GENEVIEVE BELL:
Always a good question to ask an anthropologist, because what I'm going to do is make that an even more complicated question than it first appeared. One of the things about transparency is that it's a cultural value, right? And it's not necessarily a shared one. So for me, one of the challenges when we talk about transparency is that we sometimes talk about it as though it were a universally shared notion of goodness, as opposed to a cultural one. And I worry that when we talk about it, we forget that sometimes there are reasons why certain things are not shared, and we forget the consequences of that. So, the thing about ethics is, you're right, it's not just about morality and what is good and bad. Ethical conversations are contingent; what is ethical changes over time, despite the fact we might like to imagine that it's constant.
If we were to have a conversation in Australia, 50 years ago about what was ethical and what wasn't, a series of people who imagined they were deeply ethical human beings would have still advocated for the death penalty. They would have advocated against any form of public recognition of homosexuality, let alone the idea of gay marriage. They would have been unwilling to wrap their heads around truth and reconciliation with Indigenous people. And I would argue we still aren't there. And they would have made a case about immigration practices that might today seem backward. And at the time, those would have been people who regarded themselves as highly ethical. So, what it means to be a highly ethical person or to have an ethical framework, it's contingent, it changes over time, it’s cultural, it’s contextual, it has a history to it.
I think you can make an argument that there are some people who have the luxury of articulating it and others who are just living the consequences of it. So one of the challenges on the technical side is that many of my colleagues in large tech companies have asked for an ethical framework because they'd like to build it into the machinery. And that's always an interesting question: did you want what was ethical in 2020? And how are you going to hardwire that? When in fact, the thing about ethics, and indeed ethical dilemmas, is that it's as much about the unfolding of the conversation and the ways we need to change, in any given moment in time, as it is about a series of moral absolutes. And so the question then becomes, in some ways: who gets to frame that conversation? Who's in it and who's not in it? What are the consequences of violating or standing outside of that consensus? And how do we want to think about any of those pieces?
So, when we imagine having a conversation about ethics and artificial intelligence, my usual first question is: whose ethics? My second question is: who's having the conversation? My third question is: who's not in it, and why? And then the fourth question, in some ways, is: what might be the consequences of either having a framework or not having a framework? And in either instance, what are the consequences of not paying attention to it? We sometimes talk as though, if we just had ethics in AI, it would all be okay. And I want to suggest that, in fact, even that framing means we are not really paying attention to the challenge. Because long before we talk about ethics, we should also be talking about regulations, laws and policies, all of which exist already.
I think, for me, and for our student cohorts and the staff at the Institute, the notion of what it means to be a critical thinker, is actually to be a critical question asker. Usually the starting premise is that by the time you've decided what the problem is to solve, or the question you want to answer, you've probably missed a series of other pieces. Now, of course, the reality is, at some point, you actually have to answer a question. But I think there is huge power in resisting the first question and making sure that a whole series of others are on the table too. Because in asking all those other questions, you start to be able to take a step back to that very first question about ethics. You might say, hmm, maybe there are some other things we should talk about too, or, as well, or instead of. For both of us, and the work we're trying to do, the notion of how you might build a future you want to live in and take a series of technical systems to scale means asking this increasingly sort of complicating set of questions to make sure that we don't inadvertently build something that no one wants to live with.
LESLIE LOBLE:
Both of you are painting a powerful picture of complex, interactive, constantly changing systems, and interrelationships at the human level, system level and technological level. And what you're painting is that, in fact, there's a tremendous need for lots of different bodies of knowledge, expertise and perspectives. I think that’s quite exciting when you think of an educational context. So computational reasoning and thinking are incredibly important. But they need to be coupled with a much wider, diverse set of skills.
GENEVIEVE BELL:
Oh, absolutely. And I watch with interest, my colleagues in the United States, in particular, where the big engineering schools and computer science schools are adding in humanities and social science reasoning. Getting degrees in those fields now requires you to go back and be in dialogue with these other pieces, because it is clear to the leading lights in various fields that they've been having conversations without all the right protagonists in the room. As I watch the re-energizing of the relationship across the academic disciplines and the various practitioner classes, I see those as being really hopeful things.
LESLIE LOBLE:
I'm going to finish with two questions. Amy, if you were speaking at somebody's kitchen table, and it was some students and some parents, what would you say about their future and what they should be doing with their learning?
AMY McLENNAN:
For me, I think it has to be a message about exploring widely. It's really easy, especially when you get to the pointy end of school, to be focused on choosing your subjects, which of course automatically narrows things down, and then, within that, choosing a specific set of options that you might go on to study later, or a specific career path that you might pursue from a vocational perspective. I think always exploring widely, and potentially choosing to learn about one thing that is in direct contrast to where you think you want to go, is a really neat way of learning for the future. If you can talk about that over the kitchen table and debate it with your family and with your parents, just for fun, then that can be a really simple and exciting way to bring a little bit more colour, life and a different way of thinking into your everyday studies, without it being a complete burden.
LESLIE LOBLE:
Genevieve, because you started with the family scenario, I'm going to ask you, what would you say to teachers? And indeed, what would you say to Mark Scott or myself or other leaders of an education system?
GENEVIEVE BELL:
Well, the most powerful thing I heard Mark Scott say, two years ago, when he was standing in the Carriageworks space, was about the fact that in 2018, he had to have an eye on 2030. And I remember looking at him and thinking it is the hardest job in the world to have to think about both the distant future and the immediate present, at the same time.
And so, I think the hardest challenge that teachers and the educational system have is that they are simultaneously in the immediate present and the distant future. You are having to constantly trade off between the arc of 12 years of education and the realities of what has to happen today. Balancing that is really tricky, because it requires having an incredibly optimistic view about the prospective future, about which you have to be a little loose, because it's not going to be what you expect. But you also have to know that you can't be constantly aiming at the prospective future, because there are things that need to be done today, tomorrow, next week, or the week after.
For me, the challenge about managing something that is effectively a future industry is about how you balance between a future that is always just tomorrow but in lots of pieces. How do you have a vision for something that is distant and something that gets delivered right now? And I think it's acknowledging that it is actually a really hard place to sit, between those two things. And then also working out how you maintain some kind of vision about what 12 years from now should be like, bearing in mind that Monday is always just right around the corner …
LESLIE LOBLE:
Well, that's a terrific way to end it. I thank you both. It was really a terrific discussion. Your insights, your knowledge, your optimism and your enthusiasm are infectious; they really come across and are very powerful and useful. So thank you very, very much.
SPEAKER:
Thank you for listening to this episode of the Edspresso Series. You can find out more about the Education for a Changing World initiative via the New South Wales Department of Education website. There you can sign up to our mailing list, or you can join our conversation on Twitter @education2040.
Additional resources
Are you a NSW public school teacher who wants to know more about how to embed technology into your teaching?
The department’s Technology 4 Learning (T4L) team supports teachers, students and schools with the best technology and advice to create the classrooms of the future. You can read and subscribe to the T4L magazine, listen to their Virtual Staffroom podcast and keep up to date on technology in education news via the T4L blog.