Technology
Season 4 Episode 5 - Human And AI Collaboration
In this episode of the Futures in Digital Learning podcast, Tanya Means joins Adam Davey to explore the dynamic collaboration between humans and AI in higher education. They discuss the importance of ...
Interactive Transcript
Speaker A
On this episode of the Futures in Digital Learning podcast, we are joined by Tanya Means, founding partner and principal at Inspire Higher Ed, to discuss human and AI collaboration.
Speaker B
Well, for me, it means making decisions about what piece of a task should be best done by a human and which piece of that task should be best done by a machine or an artificial intelligence. One of the things that I think is really helpful is often when you ask ChatGPT or Claude or whatever, you'll ask a question, it'll come back with a response, and then it'll say, would you like me to do this? And we need to talk about bias in the models. You know, if I say I want a picture of me as a CEO, I have to be, you know, in a suit in an office. I mean, there's CEOs that don't wear suits and they don't go to offices.
Speaker A
Join us as we talk about how humans and AI interact and the benefits and challenges of working together with AI. Hello, and welcome to the latest episode of the Futures in Digital Learning podcast. My name is Adam Davey. I'm an instructional design manager here at the University of Arizona. I am joined by a friend of mine, Tanya Means, the founding partner and principal at Inspire Higher Ed. Tanya and I have collaborated multiple times through OLC, and I'm very excited to have her on the podcast here. Tanya, welcome.
Speaker B
Thanks so much for having me, Adam.
Speaker A
And you know, I will say one of the reasons that I'm very excited to have you on here today, Tanya, is because I just started reading your blog that you started recently, which is called the Collaboration Chronicles. Am I getting that right? Yeah, yeah. And it's about human and AI interaction. And I've enjoyed every piece that you've written. In fact, before recording today, I was just kind of going back and looking through again and getting inspired for this conversation. I guess to kind of start us off, what does collaboration between humans and AI look like? And what does that kind of mean?
Speaker B
Well, for me, it means making decisions about what piece of a task should be best done by a human and which piece of that task should be best done by a machine or an artificial intelligence. And it's really inspired by something that I've thought about for a very long time, long before we had generative AI, back when we had even just educational technology tools that we were trying to implement into our teaching: thinking about, well, which things should be done by some software and which things should really be the strength of humans. And so that's really what the Collaboration Chronicles and my writing is about: looking for the places where we need to lean into the humanness of our interactions. And sometimes that can be messy and challenging, but so much fun and rewarding. And then some things that we do that we think we need a human to do, we really don't, and they're actually better done by a machine. And so it's trying to make the best use of every resource, every time.
Speaker A
Okay, so how does that, what does that look like in the world of higher ed and, and specifically in online learning? How do, how do we kind of adapt to that new way of thinking?
Speaker B
Well, for me, what it means is actually thinking about what we're doing. I know with many of the things that we do, we get kind of in a... I won't say it's a rut, but it's a routine, and that's okay. It's manageable for our lives to get into a routine of: I meet with a professor, I talk to them about their course, we brainstorm some ideas, I go and do my thing, I build out some content, I go back to the instructor. There's a flow to how we typically do our jobs, and in most cases we are just doing them because that's the next step in the flow. Whereas if we were to sit and think about that process, maybe even map it out, draw a picture of it if that's what it takes, and think about what are the inputs for each of those points, where is the value that we see out of it, and what are the outputs, and then say, well, is this the right process? Is this the right timeline and flow? Or are there things that we could do differently, think differently, work differently? Are there things that I can outsource to either other people or to other tools? And really just being open to reimagining the process.
Speaker A
I love that term that you just used, reimagining the process. That's something that, to me, AI gives us an opportunity to do, to reimagine things. And I'm a pretty imaginative person, so I'm always trying to, you know, look at things from different perspectives and think about how we can improve, do things better. So I love that idea, that concept of, well, how can we reimagine, how can we make things better in that sense, to look at what the possibilities are? Because right now we're faced with this idea, which is kind of scary, I think, to some people.
Speaker C
Right.
Speaker A
That there's a lot of possibility for what can happen. And I think that can be scary, and that's where the hesitation is. So, you know, what would you say to someone that is maybe afraid to take that leap and saying, I don't know if I want to jump into this world of AI and start using it more, because I'm afraid of what might come of that?
Speaker B
Well, one thing we have to recognize is that AI is here.
Speaker C
Yeah.
Speaker B
There is no putting that toothpaste back in the bottle.
Speaker A
Right.
Speaker B
The tube. The genie's not going back in the bottle.
Speaker C
Yeah.
Speaker B
You know, it is here.
Speaker C
Yeah.
Speaker B
And I believe that for those who don't address how it relates to them, they're going to be swallowed up in it. It's going to become overwhelming. I don't know if everyone will be replaced. I think that there will be some job loss and some job changes. But the same thing happens through every aspect of history. When we think about innovations, things change. I mean, we don't have horses wandering down our streets anymore, and that's okay. The horses are doing other things, you know. So I want to make sure that people recognize that there isn't a going back. We have generative AI and it is making a significant impact on all of our lives. How that relates to my own personal, or your own personal, or each individual's own personal journey of how they enter that world is this: first think, what do I love the most about my job? Or what gives me the most sense of purpose, the most sense of value? And I'll bet you anything that for most people, it's not the minute tweaks of a spreadsheet or the formatting of Word to be able to produce a handout or a document. Those kinds of things aren't the things that make us feel really good about ourselves or about the work that we're doing for humanity. But if we can articulate what it is that brings you the most value or the most joy, then we can look for either tools that help you do all the other stuff or ways of thinking that help you get to the other stuff faster. You know, maybe you still need to create that spreadsheet, but you can create it as a first pass with AI and then come back and tweak it.
Speaker C
Right.
Speaker B
You know, so if we can first articulate what it is that brings us joy, what it is we love, we can then recognize where technology can help us do more of what we love and less of the minutia that bogs us down and gives us that gosh-do-I-have-to-go-to-work-today kind of feeling.
Speaker A
I've never thought about it like that.
Speaker C
I like that.
Speaker A
Like, yeah, giving us more time to do more of what we love. And I think you can kind of spin that with students, too, as a tool. In my class that I teach, this past week we talked about their passions. What are they passionate about, and how do they kind of fuel that passion while being a student, and how can they incorporate that into their academic journey, even if they're taking a course that's not, you know, in their major that they love or whatnot? So I feel like that kind of fits in a little bit with what you're saying. Like, oh yeah, how can we use AI to give you more time to kind of fuel that passion and do what you love?
Speaker B
And I love that you're addressing that with your students, because they are coming into a world where potentially, if we play this right, they could have careers that are more focused on fulfilling their passions.
Speaker C
Yeah.
Speaker B
And so having those conversations while they're students, I think is really valuable.
Speaker C
Yeah.
Speaker A
Thank you. And I do think it's important. These students are going to come into the workforce and they're going to become, you know, contributing members of society. And you want them to be able to be happy and excited about the work that they're doing and whatever it is that they're adding to our society. And also to kind of stoke their creativity a little bit.
Speaker C
Right.
Speaker A
Like, they have cool things. You know, I challenge them. I say, I don't know how you're using AI. Tell me, show me what you're doing that maybe I'm not seeing, where you're saying, I think I can use it for this assignment. Don't try to hide it from me. Show me what it is so that I can learn from you just as much as you're learning from me in this course.
Speaker B
And I think having that conversation about where they bring value that can't be found in AI, so making it okay for certain activities or certain tasks that they do where they're not using AI, and that's okay too.
Speaker A
Yeah, you know, I think that's the thing. In my role we talk a lot about policies and, you know, how instructors should build assignments to either encourage AI use or discourage AI use, or things like that. But we don't talk as much about the creative piece of it, as far as, let's see where this goes, and let's give, you know, some students a little bit more autonomy in what they're doing. And, you know, I don't know, how do we kind of have those conversations or talk about that more in terms of, like, not the black and white of this is good and this is bad, but, like you said, this is here. And let's explore.
Speaker B
Well, you brought up the idea of autonomy. We know from learning science that agency and autonomy are a big part of student motivation. Right. They want to learn if they feel like they have some control over what they're learning, or they feel like they have some agency to do things in their way.
Speaker C
Yeah.
Speaker B
And we still need to have learning outcomes that we are working toward. We still need to have quality expectations for their work products. We still need to have expectations around how much effort they put into the work. And we still need them to go through some messy struggles in order to learn. All of those things are valid. So then we start talking about, well, how do we give them more autonomy or more agency? How do we ensure their cognitive effort? How do we help them to value the struggle that comes with learning? And I think a lot of it just comes down to conversations, transparency, talking with them. You're doing that with your students right now. But also there's an element of accountability, whether that's you and I being accountability partners, where we're like, you know, we're experimenting with something, let's talk about it and compare notes and hold each other accountable for certain levels of expectations around how our work turns out. But it could also just be, you know, how do you see the relevance of what you're learning in class applying to you, your future, and the value that you will bring to society? And I think a lot of people, whether they're more traditional types of students in the 18 to 22 range right now, or even people who are returning to school to learn something new, to be able to reskill or upskill or change the trajectory of their lives, they all can see relevance, right? And if you're giving them a task and they see no relevance, then the motivation for them to actually do it well and put their own investment into it is going to be less. Whereas if you can say, I know that you're going to be sitting at your desk, your boss is going to walk in and hand you a mess of data and say, make a presentation that I can give to the board, that's so much more relevant than if they were to, and this will never happen, walk in and hand you a Scantron and say, fill in the bubbles.
Speaker A
Right.
Speaker B
It's just not going to happen. And so, I mean, we've talked for a long time about authentic assessment. It needs to be real, it needs to be relevant, it needs to be applicable to what the future is going to be. And not every learner knows what their future is going to be. So maybe part of it is an exploration process of helping them to figure out what they'll actually do in that job.
Speaker C
Right? Yeah.
Speaker A
And I think that's where the relevance comes in, not necessarily in the content, but in the skills that they're building. Because like you said, AI is not going anywhere, and it's gonna touch everything that we do. And they're gonna need these skills, these kind of digital literacy skills, whether that's AI, whether that's using multimedia in some way, doing these different things that are going to come into play in just about every role that they could possibly get into.
Speaker B
And the fun thing is you can actually use AI to do that exploration.
Speaker A
Right? Right.
Speaker B
So you can go to one of these LLMs and you can say, tell me what this particular job is going to be like in five years. And it's not going to be exactly accurate, but it's going to do a pretty good job of helping you think through the process.
Speaker A
Mm, yeah. And that kind of brings me to another point. In one of the articles you talked about how your son was using AI and able to ask a lot of questions and not have that fear of, oh, am I asking too much by having this conversation? And I love that because I think, yeah, that is powerful.
Speaker B
Well, and it really got me to thinking about how much society has kind of shaped kids these days into not asking questions, not being curious, whether it's parents who are overly pressured to do lots of things and to get them to lots of events and activities, or teachers who are like, yeah, I've got so many students, I really can't just stand here for 45 minutes and ask, you know, do you have any questions?
Speaker C
Right, right.
Speaker B
Even those of us who have a great deal of care for that individual, or want them to feel like they can just ask us questions forever, that doesn't actually really happen in most cases. And so, I mean, my son at one point had even told me, well, I don't want to ask Google that because, and I can't even remember how exactly he phrased it, but it was something like, I don't want to be a bother. It's like, how is asking a question to Google being a bother? And he's like, well, but I might have multiple questions. And then it's like, well, now he's got something that he can just ask and ask and ask and ask, and it never comes back with, okay, would you just get it already? He can just keep asking the questions, and that's okay.
Speaker C
Yeah.
Speaker A
And sometimes, I know, especially in the higher ed setting, students don't know where to go to ask these questions.
Speaker C
Right.
Speaker A
Even if they say, go to a tutor, it's like, well, finding a tutor, getting to them, knowing what do I ask the tutor? Exactly.
Speaker B
Yep, that's exactly right. In fact, my daughter said that to me the other day, because we were talking about a really, you know, complex idea. And I said, well, how come you've never asked this question before? She's like, I didn't know how to ask it, or even that I needed to know it.
Speaker C
Yeah.
Speaker B
And so, you know, I think one of the things that is really helpful is often when you ask ChatGPT or Claude or whatever, you'll ask a question, it'll come back with a response, and then it'll say, would you like me to do this? Would you like me to do that? And it gives, like, more options, almost like coaching built right in. Right? Where it's like, would you like me to lay this out as a side by side? Would you like me to, you know... So it gives more hints of ways that you can explore a topic further.
Speaker C
Yeah.
Speaker B
And at some point you're like, okay, I'm done. And you can just walk away, and it doesn't get its feelings hurt because you're done.
Speaker C
Right.
Speaker B
Which is nice.
Speaker C
Right.
Speaker A
Or like, if something comes up, you can step away and come back to it, and it's going to pick up where you left off. Yeah, I was doing something for work this past year and I went on to Claude and I asked questions, and it gave me a wireframe for a potential website. And I didn't think about that. I didn't ask for it. It just kind of took what I was doing and it spit this out. And I was like, oh, that's great. Okay, this is helpful. All right, well, now let me ask, can you adjust this and move this here? And so, yeah, it did help me see things in a way and go down a different direction that I didn't expect to go down.
Speaker B
I also like, when I get a response, there are two things. Either I don't really understand what you're suggesting, like, I don't have enough knowledge to know whether or not that's right, and I can come back and say, can you explain that to me further? Can you tell me what this particular reference means? Can you help me to understand why you suggested this? So that's one track. But the other track is coming back and saying, that's not my view, that's not my perspective. Can you reframe it given this?
Speaker C
Yeah.
Speaker B
And it will rewrite it. And then I'll be like, oh, that's closer, but not quite. What about this? And I can bring in different ideas that further enhance that idea in a different direction.
Speaker C
Yeah.
Speaker B
And nobody's getting their feelings hurt or being offended or, you know, saying, you're walking all over my thing. It just allows me to flex and shape until I get something that I really buy into or that I really like.
Speaker C
Yeah.
Speaker A
And it's still using your ideas and your thoughts to kind of help you shape that.
Speaker B
Yeah. Because the first really cool thing it put out was crap.
Speaker C
Right.
Speaker B
It's like, no, that's not what I want.
Speaker C
Right.
Speaker B
Do this. What about this? I mean, I will sit and go through probably, you know, 70 different iterations with Claude, saying, no, fix this. No, change this. No, do this, none of this. And it's fine. You can do it.
Speaker A
Yeah. And that's nice to have. You know, you're not sending it back to the same person over and over again saying, okay, look at this next draft. Can you revise this? How does this sound? Which is a lot. So I guess, okay, that kind of leads to another question: do you envision a time when AI could co-teach courses alongside humans?
Speaker B
I think it is already. And here's why I say that. If I as an instructor have a course that I built out, and I have students who are in that course, and those students are using generative AI in some way to learn or do activities in that course... I read an article, and I can't remember, I'm sorry, where exactly it was, but it was basically an article that said your students are already rewriting your content. So if they don't understand something you're saying, they're going in there asking AI, whether they're asking Google or they're asking ChatGPT. They're getting an AI response, right, to better understand what you're asking them for. If they feel like what you're giving them isn't enough or is too much, they're either summarizing or they're getting additional insights through a tutor or something like that. So I think, without officially stating I have a co-teacher who is an AI, it's already happening. Okay, now do I think that we'll have courses or content that is officially, you know, Tanya Means as instructor and Biff the AI as co-instructor? I don't know if we'll ever get to that place. But I do think that there is a value in considering what that would look like and why we would do it. What would be the value? Well, the value could be that, you know, it's bringing in expertise from so many different sources that I don't have time to go to, to, you know, conglomerate all that together. That's not the word I'm looking for, I can't think of the word at the moment... curate. There's the word I'm thinking of.
Speaker C
There you go.
Speaker B
So am I looking for a tool that can help me curate information to provide to my students? And then I'm the face that builds a relationship with them, helps them to have that motivation to learn, encourages them. But then maybe I'm also using some AI-type tools to give them faster feedback, or to help the content be more accessible, or to give them spaced practice, you know, those kinds of things. I think they all bring value to the learning process. But I don't know if that then means that I actually have to have Jill Watson as my instructor, that everybody thinks is a person but isn't, and then finds out later and is disappointed.
Speaker C
Right? Yeah.
Speaker A
That's interesting.
Speaker C
Right.
Speaker A
Like this idea that it's kind of already happening without officially happening.
Speaker B
So let me suggest something here. If I talk to 15 of my colleagues and each one of them brings up an insight that I think is valuable, and I write it down by hand, and then I take all of those notes and I use them to kind of spark my own thinking, and I write an article based on my conversations with those 15 colleagues, would I then list all of the 15 colleagues that I talked to by name and specific situation, saying, you know, I wrote this article based on these 15 different conversations, and here's what I thought based on what so-and-so said, and then I connected to that? We wouldn't do that in most cases.
Speaker C
Right.
Speaker B
But if I have a conversation with an AI that is bringing together 150 different ideas, and I'm prompting it and going back and forth and thinking about it and writing something, is there an expectation that I would then cite that AI? Or is there an expectation that I would go and try to track down those 150 sources that AI brought together? Or is there a recognition that there's really nothing new in the world, and that everything we say and everything we do is based off of inspiration from somewhere, from someone, from some conversation we had, from some experience we had? Maybe I experienced something 15 years ago that all of a sudden connected to something that I've done. You know, we need to have, I think, a recognition that everything comes from somewhere. And in most cases, it didn't come purely from my own head.
Speaker C
Right? Yeah.
Speaker A
I mean, I think, you know, I could say the same for course design.
Speaker C
Right.
Speaker A
You know, when I'm designing a course, I'm asking my colleagues, hey, what would you do with this assignment? Or, we're trying to figure this out, what tool would you use? How could you do it? And again, every course that...
Speaker B
Every course that you have developed in the past is leading you to what you expect this current course that you're building should look like, or how it should function, or what service it should offer or what value it brings. You're not going to go and, at the bottom of that course, say, built off of the 150 others.
Speaker C
Right, right, Right.
Speaker A
Exactly.
Speaker B
As humans, we can't not take into consideration all of the influences around us.
Speaker C
Yeah, yeah.
Speaker A
So kind of touching off of that idea of influence, and thinking about myths surrounding AI in education: what is a myth you would like to debunk if you could?
Speaker B
Well, there's a few. One is, I think there's an expectation that at some point a teacher, educator, mentor is no longer going to be valuable. And I completely disagree with that.
Speaker C
Yeah.
Speaker B
I mean, Sal Khan in his book had a perfect line. I even posted about it on LinkedIn because I loved how he said it: teachers are only going to become more valuable.
Speaker C
Yeah.
Speaker B
So I don't think we're writing educators out of the equation.
Speaker C
Right.
Speaker B
So that's the first one. The second one is, I don't think that there's ever going to come a time where a learner can't learn something more. And, you know, I think in some cases we've had an educational system that was: check these boxes, get this piece of paper, and then go do this job.
Speaker C
Right.
Speaker B
And that's the process. If we could change to more of an expectation of I'm always learning something new, I'm always gaining new understanding, I'm always seeing things in a different light, then learning just becomes. I mean, it already is, but we just don't recognize it. Learning just becomes a part of everything we do.
Speaker C
Yeah.
Speaker B
Living and breathing and learning, they're just all kind of part of life. And so when we think about AI, when we think about expectations that people have around it, in some ways I think some people have this idea that it's either right or it's wrong. That's why we get into the concerns about hallucinations.
Speaker C
Right, Right.
Speaker B
You need to go and verify all these sources because AI can hallucinate. Or you see it at the bottom of everything: you know, Gemini can be wrong, ChatGPT can be wrong, Claude can be wrong. Well, yes. So can people, you know.
Speaker C
Right.
Speaker B
And it's based off of people, so.
Speaker C
Right.
Speaker B
So everything can be wrong. Well, you know, two plus two is always four, and I don't care how many times ChatGPT tries to convince you that it can be five if you want it to be five; that's not true. But with the idea of hallucinations, and the idea that everything's not black and white, I hope we can at some point get beyond that expectation to recognize that there's a great amount of value in things being gray, or things being not on or off.
Speaker C
Yeah.
Speaker B
But transparency is really important there. So I posted about this on LinkedIn. Wouldn't it be great if AI could come back to you and say, well, I can't find any articles specifically related to topic X, but wouldn't it be great if somebody wrote an article called this? Instead of making it up and saying, go find this article, which doesn't exist. No, but that'd be a great article to write and a great area of study to do. And so maybe training AI platforms to be more transparent, you know, to be able to say, well, there doesn't seem to be any information about that, but it'd be great if there was. Or, I've read this website and it doesn't say, or, I can't find where it says this, can you help me find it? Because there have been multiple times where I've said, use this website to articulate this thought, and it'll say, oh, it doesn't say anything about that on the website. Yes, it does. Screenshot, screen capture: it's right here.
Speaker C
Yeah.
Speaker B
Oh, I missed that. You know, so instead of trying to get these large language models to present things as hard facts, get them to present it as, well, I didn't see it, can you help me?
Speaker C
Right, that kind of. Yeah.
Speaker A
And then, you know, have that conversation about our own thinking process, right, to go through. And I've kind of talked about this with people. When ChatGPT first burst on the scene and all these LLMs started becoming more prevalent, I said, you know, we went through this shift when calculators came out, right? And people stopped having to do math by hand, and they could do it on calculators. And everybody said, oh, we're going to ruin the way people think, it's going to be so easy for them to just do it on a calculator. And then we kind of got through that part. And even then, I remember there were times where you could enter an equation in a calculator one way and get one answer, and enter it in a different way and get a different answer, you know, those types of anomalies with the calculator as well.
Speaker C
Right.
Speaker A
And then, you know, Wikipedia came, and it was like, oh, this is such ease of access to information. Oh, but you can't trust any of it because it's entered by humans, and you don't know what's real and what's not, and anybody can put anything on Wikipedia. But it was still, you know, a valuable source of information that people went to and used, and still use. Right. And now this is, to me, the next kind of thing. And not to equate all of those as being completely similar, but that idea that there's always something that kind of comes around. And this obviously is going to have, you know, maybe greater impacts in what it can and can't do compared to those other tools. But it's a lot of the same types of conversations that surround it. And, you know, how do we adjust to this as a new part of our world, and what are the impacts it's going to have?
Speaker B
And you started to say something that really resonated with me, this idea of evaluating our thinking. I think we haven't maybe put as much thought into how we think as into what we think.
Speaker C
Yeah.
Speaker B
Or where things come from or, you know, why I have this particular perspective or, you know, why your perspective is different from my perspective, or where there's overlap or where there's, you know, untenable differences. I mean.
Speaker C
Right.
Speaker B
We haven't really done that. And so if we're able to have tools that help us with that: give me a different perspective, give me a different idea, give me three alterations of this. Maybe it'll help us to think more about thinking. From an educational perspective, a pedagogical perspective, we've learned about metacognition, thinking about thinking, but most people don't do that.
Speaker C
Right.
Speaker B
Like, why do I hold this particular belief or thought process? I don't know, it just is. Well, maybe we should think about it. And maybe if we're not doing some of the minutiae, we can dig down into that and think a little bit more about it and talk a little bit more about it. And I think that's valuable.
Speaker A
I agree. I completely agree. And I think these are good conversations to be having, and good ways to, you know, look at the positive, as opposed to what I kind of refer to as the Terminator outlook, where we're afraid that it's going to take over and destroy all the humans and send us into an apocalyptic future.
Speaker B
Well, and that's actually an interesting point to bring up as well. For all of my positive and aspirational and hopeful feelings about the potential of the future, there are definitely things that we need to talk about. There are dangers, there are risks, there are ethical concerns.
Speaker C
Right.
Speaker B
And we need to talk about those. We need to talk about bias in the models. You know, if I say I want a picture of me as a CEO, it puts me, you know, in a suit in an office. I mean, there are CEOs that don't wear suits, and they don't go to offices.
Speaker A
Right.
Speaker B
Or it makes you a male when you're female. There are definitely challenges and risks, and even dangers, associated with what could happen with AI. I really appreciate some of the work that Anthropic is doing around those ethical conversations and trying to figure that out, and others are doing it too, I'm not trying to single anyone out. But, you know, when we want to use a tool to augment so much of what our society will be, we need to have those conversations, and we need to evaluate: is this right? Is this in the best interest of humans, and not just in the best.
Speaker A
Interest of capitalism?
Speaker B
Well, not to say there's anything wrong with capitalism either, but recognizing it needs to serve a good.
Speaker C
Right.
Speaker B
It needs to be good for society. It needs to be something that makes us better, instead of something that's used for control, or used for nefarious purposes, or used to benefit one group over another. So I think it's important for us to have those conversations, and I appreciate everyone who brings those up. I don't want it to be fear mongering, though.
Speaker C
Right.
Speaker B
I don't want it to be like the world is going to end. It's not. But yeah, I can see where you're afraid, and let's talk about your fears.
Speaker C
Yeah.
Speaker B
And how to address them.
Speaker C
Right.
Speaker A
And that's important, to be able to have those open and honest conversations to address fears, but in a way that is productive and not, like you said, just fear mongering, where we're dismissing it as something that can't happen for fear of ruining society. There are positives and negatives to anything. We could say the same about anything that has come along, you know, air travel, or shopping at the grocery store.
Speaker C
Right.
Speaker A
Like, you know, all of these things have these positives and negatives that we can talk about in different ways.
Speaker C
But.
Speaker B
Right.
Speaker A
So, yeah, I think it is good to continue having these conversations. That's why I'm grateful to you for being able to come on and talk with us on the podcast about some of these things and, you know, continue those conversations.
Speaker B
Absolutely. It's always so fun to think about and talk about and collaborate with other people, and, you know, it always leads to more opportunities for more things in the future.
Speaker A
Agreed. Agreed. Yeah, so I think that's a good place to end, and we'll leave people with things to think about, and hopefully we can come back and have more of these conversations in the future as well.
Speaker B
Absolutely. Thanks so much.
Speaker A
Awesome. All right, thank you, Tanya. And for listeners, please check out her blog on Substack. We'll post a link in the notes for this episode. And yeah, thanks again for joining us.
Speaker D
The Futures in Digital Learning podcast is a production of the University of Arizona University Center for Assessment, Teaching and Technology. If you have any questions, comments or ideas you'd like to share with our office, go to the Contact Us link on our website, ucatt.arizona.edu.
Topics Covered
human and AI collaboration
generative AI in education
the Collaboration Chronicles blog
online learning
reimagining the process
student autonomy
digital literacy skills
AI in higher education
authentic assessment
AI tools for educators
human strengths in AI
creativity and AI
AI impact on jobs
AI for student engagement
technology in teaching
future of work with AI