
157 - Diyi Yang: Socially Aware Large Language Models

In this episode of the Stanford Psychology Podcast, co-host Souda Karajah interviews Professor Diyi Yang about her research on socially aware large language models. They explore the intersection of te...


Transcript

spk_0 Welcome back to the Stanford Psychology Podcast.
spk_0 I'm Souda Karajah, a pre-doctoral fellow at Stanford University.
spk_0 I'm excited to be joining the show as one of your co-hosts.
spk_0 For my first episode today, I'm so happy to share my conversation with Professor Diyi Yang,
spk_0 whose research I've long admired and who has been an incredible role model
spk_0 since I took her human-centric large language models class.
spk_0 She is an assistant professor in the computer science department at Stanford University,
spk_0 affiliated with the Stanford Natural Language Processing Group,
spk_0 Stanford Human Computer Interaction Group, Stanford AI Lab,
spk_0 and the Stanford Institute for Human-Centered Artificial Intelligence (HAI).
spk_0 She is also leading the Social and Language Technologies Lab,
spk_0 where they study socially aware natural language processing.
spk_0 Her research goal is to better understand human communication and social context,
spk_0 and build socially aware language technologies
spk_0 via methods from NLP, deep learning, and machine learning,
spk_0 as well as theories from the social sciences and linguistics,
spk_0 to support human-human and human-computer interaction.
spk_0 In today's episode, we discussed her interdisciplinary approach to research,
spk_0 along with her recent paper, Social Skill Training with Large Language Models,
spk_0 which introduces a new framework that supports making social skills training
spk_0 more available, accessible, and inviting.
spk_0 So without further ado, here's our conversation.
spk_0 Thank you so much for joining me. I'm so excited to be hosting you today.
spk_0 And let's start off by getting to know your research a little better.
spk_0 You are in the Social and Language Technologies Lab, and your research revolves around
spk_0 human-centric and socially aware natural language processing.
spk_0 In your own words, how would you define socially aware natural language processing?
spk_0 And why does studying it matter?
spk_0 So, great question, because I also think about this question all the time.
spk_0 We call the research we are working on socially aware language technology.
spk_0 Basically, it's about the study and the development of language technologies from a
spk_0 social perspective. The goal here is that we want to enable today's AI systems to better understand
spk_0 and respond to social signals expressed in language, and also the broader physical and social
spk_0 environment. So you can imagine that the socially aware systems can recognize social aspects,
spk_0 such as social factors, culture, emotion, perspectives, etc.
spk_0 And more importantly, they can help us produce or process implications and meanings behind
spk_0 the language in the same way humans do. There are three dimensions I often use when I think
spk_0 about socially aware language technologies. So the first one is what we call social factors.
spk_0 If you think about the language communication between people for a specific message,
spk_0 it's not only just about the content, it's also about who is a speaker, who is a receiver,
spk_0 what's their social relation, and what's the context, governed by what type of social norms,
spk_0 culture, and ideology, and for what type of communicative goals. So, taking all of this together,
spk_0 we will have a much better understanding of a sentence or a message. And when it comes to social
spk_0 interactions, to me, it's more about the activities and the interactions humans have within our
spk_0 institutions. This could include some of the broader organizational or cultural norms that
spk_0 govern interpersonal communication. The last dimension is what I call implication or
spk_0 social implication. It refers to the broader impact of an NLP system on society,
spk_0 including understanding both the positive and negative effects.
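The three dimensions described above (social factors, social interactions, social implications) can be pictured as one annotation record attached to a message. The field names below are purely illustrative, not a schema from the SALT lab:

```python
from dataclasses import dataclass, field

@dataclass
class SocialContext:
    # Dimension 1: social factors around a single message
    speaker: str
    receiver: str
    relation: str                                # e.g. "roommates"
    norms: list = field(default_factory=list)    # governing social/cultural norms
    # Dimension 2: the surrounding social interaction
    interaction: str = ""                        # e.g. "conflict-resolution chat"
    # Dimension 3: broader implications of deploying a system here
    implications: list = field(default_factory=list)

msg = SocialContext("Alice", "Bob", "roommates",
                    norms=["politeness"],
                    interaction="chore negotiation",
                    implications=["over-reliance risk"])
```

A socially aware system would consume a record like this alongside the raw text, rather than the text alone.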
spk_0 How did you get started with this? Because there is some CS, there is some social sciences.
spk_0 How did you become interested in what you research now? And what would you say were some of the
spk_0 major influences and experiences along the way? Yeah, great question. I think
spk_0 I first got attracted to this social dimension as I moved to the US to start graduate school.
spk_0 Before that, I did a lot of training in computer science and machine learning. When I moved to the US,
spk_0 I got very fascinated by the different types of language people speak around me. There are
spk_0 people from different countries, we all gather together to do graduate school together. You can see
spk_0 that there is a very strong difference when it comes to the language people use. The same
spk_0 expression might be perceived differently, and there are a lot of cultural factors there. That's the
spk_0 first time I really got excited about all these kinds of factors. And I realized that language
spk_0 or language technology is not only just about words, grammar and the sentences, it's also about
spk_0 people and what people mean when we use language to accomplish different goals, such as
spk_0 reading news or connecting with friends and family. In one of your lectures in the human-
spk_0 centric large language models class, something that stuck with me was that you kept emphasizing
spk_0 the data comes from humans, the model is produced by humans, the choices to build architecture
spk_0 are made by humans. And even the end product is for humans. Your work emphasizes the
spk_0 human element at every stage of AI development or just any technological development.
spk_0 I'm just curious what could go wrong if we don't pay attention to these human biases infused
spk_0 into these systems, and what can we do about it, or should we just embrace it? How does your research
spk_0 play into it? Because we talked about the social aspect, and I can't imagine the social aspect
spk_0 without the human one. Yeah, I think this is also why I wanted to create the course on human-
spk_0 centric large language models at Stanford. If we take a step back, a lot of the topics
spk_0 here are not only about the technical solutions. We all share the view that data matters and data is
spk_0 produced by people, and in the end the product and the system are still made for humans. I think one
spk_0 side we really want to make sure the problems we are going to work on matter in diverse real-
spk_0 world domains. And when I think about finance, education, or even healthcare, I think taking
spk_0 real-world impact into the first place to think about what problems to work on is quite important.
spk_0 And going back to the data dimension, we briefly mentioned culture, language, etc. But if you
spk_0 think about it, data is not only just the text, the words, the images, and all of that.
spk_0 It's also about human preferences, our cultures, biases, a lot of signals that are
spk_0 embedded in data. So this is where the human factor, or human-centered flavor or perspective,
spk_0 will help us understand our data better. And this is not only about data, if you think about
spk_0 training models, we have many different type of model architectures, etc. I think the researchers
spk_0 and practitioners play a very big role in deciding how models are trained, what type of data we use,
spk_0 and how we make trade-offs between accuracy, efficiency, etc. And more importantly,
spk_0 around risk and all sorts of safeguards we need to think about. I think that's a very important
spk_0 space for many of us to think about. And if we do not pay attention to those human-centric aspects,
spk_0 I think the consequences could be huge and really consequential in many ways. For example,
spk_0 our systems may not be aligned with human behavior. And in addition to the system themselves,
spk_0 when it comes to people, you imagine a lot of issues around the trust or psychological
spk_0 impact of how AI systems may influence our behavior, all of those are very important today.
spk_0 And we really need to think about them as different layers so that we can avoid and prevent
spk_0 failures in communication, hallucinations, and other aspects. I think it's a really good time to
spk_0 go into your paper, because as someone working at the intersection of computer science and human
spk_0 behavior myself, I found your paper, Social Skill Training with Large Language Models, very
spk_0 interesting, where you introduce a large language model-driven framework called AI Partner, AI
spk_0 Mentor, that makes social skills training more accessible. What made you recognize this as a
spk_0 critical gap that AI could help address? Yes, this is actually one of my favorite works.
spk_0 I got excited about this topic for several reasons. First is that myself, I am not good at
spk_0 many communication skills. Part of the reason could be English is not my native language.
spk_0 In the US, trying to navigate the context here was quite a challenge in my first years here. So I always
spk_0 want to learn all sorts of social skills. And when I talk to my friends and students, I
spk_0 realize that learning these kinds of social skills is often out of reach for most of us.
spk_0 It's very time consuming. Let's say, oh, I want to learn conflict resolution. It's very
spk_0 time consuming to do it. It's also very expensive if you are going to get a coach. Also, such
spk_0 coaches and their availability might be limited. More importantly, I think many of those social skills,
spk_0 practicing them is actually psychologically unsafe. So if you think about, oh, I want to
spk_0 negotiate or do this conflict resolution about this topic with my roommate, with my boss,
spk_0 I think a lot of times people don't feel like they can easily open up. So I think the part of
spk_0 the reason is that I personally really want to learn it. The second thing is that I see a lot of
spk_0 the ways of how we build and evaluate AI today, or large language models today, is more about can those
spk_0 models do well on math, do well on coding. And I really want to see can they really also do well
spk_0 with social skills. When large language models get popular and the performance gets very impressive,
spk_0 I realize if we use them well, it's a great way to help enable very interactive training. Because
spk_0 one of the big advantages of large language models is that you can have conversations with
spk_0 these AI systems and you can do a lot of chit-chat. We can even roleplay different characters
spk_0 there with large language models. So I think it really provides the interactive and personalized
spk_0 training paradigm. To me, I think this would help transform learning in many, many unthinkable ways.
spk_0 So this is like how this kind of research started. And I think you mentioned our framework. It's
spk_0 called AI Partner and AI Mentor. I use it as more like a conceptual framework to think about how
spk_0 we use large language models to help people learn social skills. So think about you want to learn
spk_0 conflict resolution. Then you can actually talk to the AI partner. This AI partner might
spk_0 be a roleplay of your roommate. And then you practice different types of topics with this AI
spk_0 partner. And then the entire conversation is going to be coached by this AI mentor.
spk_0 So here, both AI partner and AI mentor are what we call large language model agents.
spk_0 One is there so you can practice different types of conversations. And the AI mentor is tailored in a way
spk_0 that we will bring in or build domain expertise into specific models so that those AI mentors can
spk_0 actually give you very realistic feedback. So this is like how I would describe this framework.
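The two-agent loop described here can be sketched in a few lines. The `llm` callable, the prompts, and the stub below are illustrative placeholders so the structure runs standalone, not the paper's actual implementation:

```python
# Minimal sketch of the AI Partner / AI Mentor loop: the partner roleplays
# the interlocutor, while the mentor coaches each exchange.

def make_agent(llm, system_prompt):
    """Wrap an LLM call with a fixed persona/system prompt."""
    def respond(history):
        return llm(system_prompt, history)
    return respond

def training_session(llm, learner_turns):
    partner = make_agent(llm, "Roleplay the learner's roommate in a conflict.")
    mentor = make_agent(llm, "Coach the learner: give brief, grounded feedback.")
    history, feedback = [], []
    for turn in learner_turns:
        history.append(("learner", turn))
        history.append(("partner", partner(history)))  # simulated interlocutor
        feedback.append(mentor(history))               # coaching on the exchange
    return history, feedback

# Stub LLM so the sketch is runnable without an API.
def stub_llm(system_prompt, history):
    role = "partner" if "Roleplay" in system_prompt else "mentor"
    return f"[{role} reply after {len(history)} turns]"

history, feedback = training_session(stub_llm, ["Hi, can we talk about the dishes?"])
```

In a real system, `llm` would be a call to a chat model, and the mentor's prompt would embed the domain expertise described next.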
spk_0 And although it only has one AI partner and one AI mentor in the framework, you can actually
spk_0 use it in different ways. So think about if you want to practice how to talk to a group of people,
spk_0 then you can have multiple AI partners there. You can even have different AI mentors. Sometimes
spk_0 you may prefer, oh, what if my mentor has this type of personality, or what if my mentor has this
spk_0 domain expertise; you can actually use it in very different ways. So when a user wants to develop
spk_0 a new social skill, like talking to a roommate or just having a job interview,
spk_0 the AI partner, that agent, guides them through a realistic scenario with a simulated conversation.
spk_0 And then we have the AI mentor which offers insightful knowledge-based feedback at key moments
spk_0 during the simulation. So we have this conversation partner and we have this instructor that is
spk_0 like overlooking the whole conversation. As you mentioned before, the communication isn't just
spk_0 about syntax or meaning or just like the language. It's also about intentions, emotions, and social
spk_0 cues. And you said that apart from all the other technical skills of large language models,
spk_0 you're excited to see these things. How does the AI Partner, AI Mentor (APAM) framework take these
spk_0 deeper layers of communication into account? How do models understand these subtle social
spk_0 contexts that humans navigate almost instinctively? Yeah, that's a great question. So what I can
spk_0 share is that there are different layers when we think about the development of AI partner and
spk_0 AI mentor in practice. So far, we have used it for conflict resolution, where we built a system
spk_0 called Rehearsal to help people practice how to have a difficult conversation. And we also have
spk_0 the domain where we use AI partner and AI mentor to help novice counselors learn how to be a
spk_0 good listener. Throughout those two different developments, the first challenge, even before we get to
spk_0 these nuances of social awareness, is actually to make sure the technology works well
spk_0 in the first place. Despite the fact that large language models are very powerful in doing roleplay,
spk_0 et cetera, their simulation, like if you're going to use LLMs to simulate your roommate
spk_0 or simulate a typical learning partner, that process may not be very realistic, because
spk_0 models tend to create caricatures and they tend to amplify a lot of the attributes if you
spk_0 tell them, okay, here is a specific person. So it's actually quite difficult to get the
spk_0 simulation to be realistic. And then if you bring in or think about the earlier topic we had about
spk_0 more human-centric perspectives: we actually first worked with domain users. So we talked to
spk_0 senior supervisors, we talked to novice counselors. We tried to work with domain users to create some
spk_0 sort of prototype, because it's really difficult to simulate a specific individual. Instead of doing
spk_0 a simulation of a specific individual, we tried to create a typical prototype for practicing
spk_0 that specific skill. So here we are not talking about a specific individual. We are talking
spk_0 about a type of individual who you can actually practice with. And by working with domain users,
spk_0 we are actually bringing in a lot of the nuances. Like, sometimes people may not easily open up. So
spk_0 that's a feature we will build into the simulation. We'll gather many, many of those. And then when it
spk_0 comes to the implementation stage, we actually develop some techniques such as self-critique and
spk_0 self-improvement. So the key idea here is, when the simulator, the AI partner, is in the process of
spk_0 trying to output a sentence, we will try to let the model critique
spk_0 itself: is this a reasonable response to use here that is aligned with all the domain
spk_0 knowledge, is it appropriate, etc.? So many of the contextual assessments will be achieved
spk_0 at this stage. So this is how we actually make sure that interactions could be realistic in the
spk_0 first place. I think the second dimension I would emphasize is this is not only just a process
spk_0 of building AI systems. We have great theories when it comes to social skills,
spk_0 from the communication field, from the psychology field, and also from the psychotherapy field. So we actually
spk_0 use a lot of theories from those different research fields and try to see whether we could bring them
spk_0 into the development process, so that a lot of the feedback is actually very well grounded for users.
spk_0 So I think those are the two challenges and approaches we are working on these days.
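The self-critique step described here can be sketched as a generate-critique-revise loop. The three callables and the stub logic are hypothetical stand-ins for LLM calls, not the actual prompts used in the work:

```python
# Before the AI partner emits a reply, it judges its own draft against domain
# criteria and revises if the critique fails.

def self_critique_reply(generate, critique, revise, context, max_rounds=2):
    draft = generate(context)
    for _ in range(max_rounds):
        verdict = critique(context, draft)   # e.g. "is this realistic/appropriate?"
        if verdict == "ok":
            break
        draft = revise(context, draft, verdict)
    return draft

# Stubs so the control flow is runnable.
def gen(ctx): return "overly dramatic reply"
def crit(ctx, d): return "ok" if "calm" in d else "too exaggerated"
def rev(ctx, d, v): return "calm reply"

result = self_critique_reply(gen, crit, rev, "roommate conflict")
```

The critique criteria are where the domain theories mentioned above would be encoded.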
spk_0 In terms of a lot of the cultural dynamics, a lot of the social awareness, we haven't figured out
spk_0 a way to integrate them into the space. I think that itself is an open question, and I wish that
spk_0 our AI Partner, AI Mentor framework could include more of those culture-related practices in the
spk_0 future. I'm curious about the after production stage. You mentioned that there's still work going
spk_0 on and it's not perfect, but it's well known that evaluating models for accuracy and consistency
spk_0 is key, and there are plenty of benchmarks that focus on more static, measurable aspects of
spk_0 model performance. But in the system you are talking about, these models are set to have,
spk_0 for example, different personalities depending on the mode they're in, and even give feedback on how
spk_0 empathetic or engaged the person practicing is, for example, in the case of a therapist you mentioned.
spk_0 These are skills that are pretty straightforward for humans to do.
spk_0 So how do you plan to evaluate these human qualities, like having a personality,
spk_0 understanding emotions, or showing empathy and expertise? Are they measured using the same
spk_0 standards we would use for a person and if so, how do you even quantify something like that?
spk_0 I'm just curious because you mentioned using these social fields to build these models, but are we going
spk_0 to hold these models to human standards, to the standards of these social fields, so that we are
spk_0 actually evaluating them as if they're an actual agent we are practicing with or that is
spk_0 mentoring us? Yeah, very thought-provoking question. I don't think there is a single standard or
spk_0 one-size-fits-all answer; I think it really depends on the case. So maybe let me just use a few
spk_0 examples to go through these. If you think about learning to be a good supporter, so you want to
spk_0 provide a social support and this is the skill that many of us want to learn how to be more empathetic
spk_0 etc. And in the practice stage, maybe you want to interact with different types of patients or
spk_0 clients with different personalities. So in that situation, you need to have the personality
spk_0 really embedded into the simulation. So then the evaluation would be similar to what we do
spk_0 when it comes to evaluating humans' personality. So this is like scenarios where you want to
spk_0 blend them in. And similarly, I think if you want to bring in a lot of other aspects, such as
spk_0 oh, maybe this is the person who grew up in Brazil. Here is a person who has more knowledge
spk_0 about Canada, etc. I think a lot of the domain expertise could also be introduced in similar ways.
spk_0 Evaluation is really not an easy task. I think personality is easier because we
spk_0 have great theories and frameworks, so you can easily quantify it. But a lot of the social aspects
spk_0 are actually very fuzzy and not well defined. We don't even know how to operationalize them.
spk_0 So the evaluation will become something that's quite tricky and open-ended. And when we go
spk_0 to a little bit broader social construct, it's also a bigger spectrum because if you think about
spk_0 how to evaluate whether the simulation is a really good simulation of a person from Brazil.
spk_0 This is a really hard question because there are all sorts of different individuals, etc. There
spk_0 may not be a ground-truth answer that you can use, compared to other situations. So for many of
spk_0 the evaluations here, we actually sometimes work with human experts. We ask, is this
spk_0 a simulation that looks or sounds realistic to you? Is this a good reflection of the
spk_0 clients you had in your sessions? So we actually developed a lot of these more human studies or
spk_0 evaluations to evaluate such simulations. And I think a similar challenge exists when you
spk_0 want to simulate a culture. For example, in the context of therapy, if you want to learn how to
spk_0 talk to patients from different cultures, there isn't a single formula saying that, okay, this culture
spk_0 has this kind of prototype. So that's what makes this very, very difficult today. And also,
spk_0 I want to just take a step back. I like your question. I think one impression I have
spk_0 is that social skill training, the way we think about it today, is a
spk_0 first-stage training for people. It's not like, okay, as long as you finish this, you will be an expert
spk_0 in the world. It's not like that. We imagine that especially for novices, for beginners, for people
spk_0 who don't know the field very well, they can actually use this kind of AI partner and AI mentor
spk_0 in the starting stage. So that they get a lot of practice, a lot of experience. And later,
spk_0 a lot of the social factors that we briefly mentioned earlier, I think you really need human
spk_0 connections and have these kinds of real sessions to get to know it. If you can imagine, being a good
spk_0 supporter does not only require you to produce the best message or the best language, it's also
spk_0 about how you behave, your emotion, your posture, how you make eye contact, a lot of those. So I think
spk_0 it will be very, very cool to think about how to make social skill training in a physical world
spk_0 or a 3D space so that we can actually bring in a lot of those to help people learn in the earlier
spk_0 stage. If you're also okay with it, I want to dive into something you just mentioned because I
spk_0 noticed that in the paper too, you mentioned multiple times that you don't expect the system to
spk_0 just replace all these trainers or replace all this social skills training. You want this
spk_0 system to be a collaborator that you introduce as a helper to the
spk_0 whole social skills training system. Can you please elaborate on that a little bit more?
spk_0 Yes, great question. I think first we want to build a system that can empower humans. This is
spk_0 like our true belief. And the reason why I mentioned that the social skill training AI partner and
spk_0 AI mentor can be used in the earlier stage, I think it's a great demonstration of that.
spk_0 So not only we want to provide assistance or feedback to people, we also want to, if you think
spk_0 about like who are the stakeholders involved in this ecosystem, we want to also help people
spk_0 who are providing feedback to others to help their job to some extent. Going back to our
spk_0 psychotherapy context: actually, senior supervisors need to provide feedback right now, like in a
spk_0 classroom, to novice counselors. And sometimes you may only have a few experts and then maybe 20 or
spk_0 10 of those novice counselors; it is actually very challenging for the instructor or for the senior
spk_0 supervisor to give feedback to all or to everyone in a very personalized way. So imagine at those
spk_0 moments this kind of AI mentor would be a great help to the people who are already helping others.
spk_0 So I think not only we want to help learners who want to learn those skills, we also hope that
spk_0 this tool will also be very helpful for people who are helping others. This is like actually
spk_0 reflected in one of our paper titles, Helping the Helper, from an AI perspective.
spk_0 You mentioned that you want to see these models to succeed in all these social skills or
spk_0 all these human-like skills. So do you have in mind any use cases where you saw these models
spk_0 succeed, where you can give an example? Yeah, success is a big word. So far we have built
spk_0 systems to help people learn conflict resolution. The system is called Rehearsal. So the idea is that
spk_0 you want to learn how to have a difficult conversation and then you can talk to our AI partner.
spk_0 This could be like your roommate or your boss and then you can have this realistic simulation
spk_0 of conversations. And then in the process it will allow you to explore counterfactuals. Like,
spk_0 oh, what if at this moment I use this sentence, this strategy versus that? What would happen if I
spk_0 use a different strategy? And then the system will help you forecast what would happen if you take
spk_0 action one or action two. We actually leverage this theory called interests, rights, and power from
spk_0 conflict resolution. And so we basically try to introduce a lot of these pedagogical skills into
spk_0 the process so that people actually could learn from this. The goal is never to create a 100%
spk_0 exact replication of any individual. The goal is to help you learn how to deal with similar situations
spk_0 with a typical prototype. So this is the one system we have created. We did a study with around
spk_0 40 participants and we let them try the system. We evaluated how well they did before
spk_0 and how well they did after. We saw that people's knowledge about conflict resolution actually
spk_0 didn't change at all. Their knowledge didn't change. However, if you let them do an interactive
spk_0 conflict resolution, people who practiced with our systems actually did much better compared to
spk_0 people who haven't used the system. This was very, very surprising to us. But then later we realized
spk_0 that this is the difference between being book smart versus being street smart. It's more
spk_0 about how you use the knowledge in interactive contexts rather than remembering the concepts or knowledge.
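The "what if I used this strategy instead?" exploration described earlier can be sketched as branching the conversation history at one turn. The stubbed partner below is an illustrative placeholder for an LLM call:

```python
def explore_counterfactuals(partner, history, turn_index, alternatives):
    """For each alternative learner move at turn_index, ask the simulated
    partner what would happen next in that branch."""
    outcomes = {}
    for alt in alternatives:
        branch = history[:turn_index] + [("learner", alt)]
        outcomes[alt] = partner(branch)      # forecast the continuation
    return outcomes

# Stub partner so the sketch is runnable: de-escalates only if the learner
# signals understanding.
def stub_partner(branch):
    last_utterance = branch[-1][1]
    return "de-escalates" if "understand" in last_utterance else "pushes back"

history = [("learner", "You never do the dishes!"),
           ("partner", "That's unfair.")]
outcomes = explore_counterfactuals(
    stub_partner, history, 0,
    ["You never do the dishes!",
     "I understand you're busy; can we split the chores?"])
```

Each branch shows the learner a plausible consequence of a strategy without committing the real conversation to it.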
spk_0 So that's about conflict resolution. For therapy training, we built a system called CARE,
spk_0 where novice counselors can actually practice with different types of AI patients. We created those
spk_0 AI patients based on the learning materials that counselors need to learn in their training.
spk_0 And we also built AI mentors so that when they practice they can also get feedback along the way.
spk_0 So for this system we have conducted several stages of evaluation. The first stage is we have
spk_0 invited around 100 peer counselors to try out the system, and we found that the system helped them
spk_0 learn critical skills such as empathy, reflection, session management, etc. And then right now we are
spk_0 also working with the medical school here at Stanford to look at how students in their
spk_0 psychotherapy training classes use CARE as part of their learning process, to understand
spk_0 more about how people use it, whether it is useful, and how we can better improve the system.
spk_0 So I won't use the word success at this moment because it's still a very, very new and
spk_0 exciting direction and we are trying to understand and evaluate these type of systems from
spk_0 different dimensions so that maybe in the next few years we could build a working system that
spk_0 works well when it comes to the real world. I'm not using the word success, so given the
spk_0 progress you made with these systems, in contrast, what are some technical or social challenges
spk_0 for these systems that you see right now? Are there interactions the systems struggle with,
spk_0 or are there certain skills the models can't capture, or are there any fields where people are more
spk_0 hesitant to use such systems? Yeah, great question. I think that there are many technical challenges and
spk_0 also many social challenges. If we start with the social challenges first, one thing we sometimes
spk_0 worry about is that this kind of practice may not give people very, very realistic practice. Some of our
spk_0 participants, when they use our systems, share that, oh, sometimes the conversations feel like they
spk_0 are very scripted, and although we have a generative AI agent here, it is not like a real
spk_0 person. So I think that's one thing where I'm kind of optimistic that with better and more
spk_0 powerful models such simulation might get improved a lot. And then the second kind of social
spk_0 technical challenge is more about these kinds of scenarios where people practice
spk_0 and then they may develop some kind of reliance. If you always come to these kinds of systems
spk_0 whenever you have any struggles or any questions (let's say the system can help you very well),
spk_0 then overall, we haven't had any studies on this topic yet, but we worry that one of the
spk_0 issues is this kind of over-reliance: people rely on the systems and don't realize that this is
spk_0 not what the real world may look like. What you practice here is just like when you do
spk_0 some experiments in your chemistry lab; when you go to the real job it may be very different.
spk_0 So there is this kind of over-reliance and also this simulation-to-real-world difference,
spk_0 and I think it's very important to point these out to people so that, from a social perspective,
spk_0 people won't develop too high or too unrealistic expectations of the system. And then for other
spk_0 technical challenges, I would say that so far the systems we mentioned are actually text-based, so
spk_0 you chat with the AI partner and then get feedback from the AI mentor. We can imagine that a lot of
spk_0 the social skills are actually more about talking, or audio, or how we behave, so bringing in additional
spk_0 modalities such as image and audio, that would be some of the technical challenges I perceive here.
spk_0 Of course, I think since the entire AI Partner and AI Mentor framework is built
spk_0 upon large language models, a lot of the interesting issues and limitations with large language
spk_0 models still hold here. For example, researchers found that large language models tend to produce
spk_0 caricatures when it comes to social simulation, or there may be cultural biases, or they might hallucinate
spk_0 when it comes to a specific domain that they don't have really good knowledge of, a lot of those
spk_0 kinds of aspects. So I think it's a great time for us to think about how to build technology by not
spk_0 only leveraging the technical advances but also thinking about a lot of the social implications around
spk_0 the space. From the social implications part, taking us all the way back to the beginning of our
spk_0 conversation: what role can a socially aware large language model like this, what role can this
spk_0 system play in improving human-computer and also human-human relationships? How might it change
spk_0 the way we communicate with machines, or models like this one, as you mentioned? Or could it
spk_0 change the way humans interact with each other as well? How do you envision the future of
spk_0 this type of human-AI collaboration, particularly in fields traditionally focused on
spk_0 human interaction, like therapy or education, or any system like this? Yeah, I think this is a great
spk_0 question. I think eventually we want socially aware NLP to help with both human-computer
spk_0 interaction and human-human interaction. Human-computer interaction, I think this part is pretty
spk_0 straightforward to myself sometimes when I think about it: if you bring in more cultural context, make
spk_0 AI systems more aware of people's emotions, intentions, perspectives, empathy, a lot of the
spk_0 social implications, then I think we definitely could make systems more capable of doing
spk_0 daily tasks or all sorts of tasks, and that would facilitate our use of computers in many ways, right?
spk_0 So we could have smarter home devices, we could have more supportive chatbots when we talk to them.
spk_0 So I think socially aware NLP and human-computer interaction is actually pretty
spk_0 straightforward to think about. When it comes to human-human interaction, I would argue that
spk_0 the topic we just talked about, social skill training, is actually a great way of thinking about
spk_0 how socially aware systems could be used to help people learn better, so that people could have
spk_0 better conversations or more positive conversations with others. If we each learn how to have a
spk_0 difficult conversation, maybe we will have more meaningful relationships with other people in the
spk_0 ecosystem. So I think that's how we think about human-human relationships. I think this also
spk_0 goes back to a lot of the topics around what the role of AI should be more broadly, rather than just
spk_0 socially aware AI, when it comes to our everyday contexts: what should the role of those
spk_0 technologies be? I would argue that we want to make AI systems, socially aware AI systems,
spk_0 more like a support system, more like a bridge for human-human interaction and human-computer
spk_0 interaction. So eventually we want to make sure that AI systems can be used to help us with
spk_0 better understanding, better empathy, and better collaboration with each other. That's a very
spk_0 exciting direction to think about. And then, going back to the education or therapy domain,
spk_0 the human-human dimension would be: if students could learn better how to
spk_0 deal with certain kinds of challenges in their learning trajectory, if teachers could be
spk_0 supported by the AI Mentor in terms of how to talk to kids better, then we could definitely use
spk_0 this kind of technology as a way to improve student-teacher relationships.
spk_0 Such systems could also make learning more personalized, and I think that this will have a
spk_0 huge impact on society. It's a similar story with therapy as well, where we can help
spk_0 not only novice counselors who are doing the practice, who are doing the interaction; we can
spk_0 also help senior supervisors who are providing feedback, who are training novice counselors in those
spk_0 contexts. So I think this is actually a very big space when we think about how socially aware AI,
spk_0 how AI, could potentially be used to help with human-human relationships.
spk_0 Thank you so much for giving us this insight into the paper and this framework, which is super
spk_0 exciting to hear about. I hate to stop talking about this framework, but before we conclude:
spk_0 for people or professionals out there who are interested in this type of work, what advice would
spk_0 you give them? Are there key skills or knowledge areas they should focus on, or what are certain
spk_0 topics that you are planning to focus on right now? Yes, I think this is an exciting area, like all
spk_0 that we have touched on today. I think this is a great direction, and when I think about the
spk_0 skills, I would encourage people to have a more open-minded understanding of and
spk_0 mindset about the space. It does not only require social science or social insights, it also
spk_0 requires some kind of computational method, so ideally people may want to pursue
spk_0 training in both areas to get exposure to this kind of thing. I know this probably sounds like a
spk_0 lot. I also want to emphasize that there have already been a lot of great interdisciplinary
spk_0 courses, not only offered at Stanford but broadly: courses such as computational social science,
spk_0 or human-centered NLP, or human-centered AI, and many many other awesome courses, like creative
spk_0 AI, generative AI, agents, all sorts of things. So with so many available resources today, we
spk_0 are actually in a better position to start or pick up research in these directions. In terms of
spk_0 big challenges, one thing I realized is the more we work on the technical side, or the more we work
spk_0 with a lot of the development questions, the more we realize that at the end of the day, the goal of
spk_0 why we build technology is not for the purpose of building technology. The goal is to think about
spk_0 how technology would help us: help more with the human touch, help us develop better and more
spk_0 meaningful interactions with each other. And if that's the goal, then we should use those goals
spk_0 to guide what we are doing today, rather than creating the hammers and then looking for problems to solve with them.
spk_0 So I think this is one kind of thing that personally I have been trying to practice:
spk_0 learning how we could actually think more from a human-centered perspective. One thing that
spk_0 we are working on, and I think it's also very well aligned with this, is to think about how
spk_0 humans and AI systems could really collaborate to create a greater collective intelligence. How could
spk_0 we facilitate such collaboration, not only from an algorithmic perspective but also from an interface
spk_0 perspective, from an implication-understanding perspective: how could we better understand the trust
spk_0 humans have toward systems, how could we make sure that the systems we build would align with
spk_0 our preferences and would serve the desired goals we had in the beginning? Thank you so much.
spk_0 And thanks so much for having this conversation too. I feel like I can see the field from a better
spk_0 perspective, because when people think about NLP and large language models, all they think about is
spk_0 technological development, or how the systems work, the GPUs, the compute power. I think it's
spk_0 such a nice breath of fresh air to see that there are people who are working on actually integrating
spk_0 these systems into social interactions and helping people highlight their own skills. So thank
spk_0 you so much for sharing your perspective with us. Thank you so much for having me. Thank you so
spk_0 much for listening. If you enjoyed this podcast, help us make even more people excited about science by
spk_0 leaving us a review on Spotify, Apple Podcasts, or elsewhere, and subscribing to our newsletter
spk_0 on Substack, Stanford PsyPod, to connect with other listeners. You can also shoot us an email
spk_0 with your thoughts or suggestions at stanfordpsypodcast@gmail.com. Thank you, and have a wonderful day.