Openbook閱讀誌

Taiwan's non-profit professional book review media. The Openbook editorial team provides original reporting, cultural observations, interviews, and major publishing news from Taiwan and abroad. https://linktr.ee/openbooktaiwan

Annual Forum 2》When AI understands emotions, can it also (help) handle emotional labor?

Liu Yucheng (left), associate professor in the Department of Sociology at Soochow University, and Wu Yuhong, a member of the Chinese Knowledge and Information Processing (CKIP) team at Academia Sinica.

Written by | Sado Mamoru (writer). Photography | Zhang Zhenzhou

Editor's note: In an era of thickening echo chambers, polarized opinions, and widespread social anxiety, will ChatGPT be the salvation of the growing single population? Or will it accelerate an inevitable interpersonal alienation? Artificial intelligence is already being used in fields such as counseling and medical care. Can AI perform, or even replace, emotional labor?

For the second session of the Openbook annual forum, Liu Yucheng, associate professor in the Department of Sociology at Soochow University, whose research specialties are artificial intelligence and sociology, and Wu Yuhong, a member of Academia Sinica's CKIP team who works on large language models, were invited to discuss the questions of capability, ethics, and values that may arise if AI comes to have emotions.

➤Can AI perform, or even replace, human emotional labor?

Before discussing whether AI might share human emotional labor, Liu Yucheng first explained the definition of "emotional labor". The term was coined by the American scholar Arlie Hochschild in 1983, and refers mainly to the management of the facial and bodily expressions people display to the public in the workplace.

"To put it simply, everyone is doing emotional labor. I am sitting here and I am also doing emotional labor. That is, I have to suppress or adjust my emotional expression to fit the scene, so I can express some emotions and some cannot." On the other hand, AI has no emotions. The user knows that he is not facing a "person", so there is no need to exert emotional labor on the AI, and may even do "weird things" to it rudely.

Liu Yucheng therefore argues: "Whether or not AI can perform human emotional labor, it has in fact already shared our emotional labor to some degree, letting us use it without worrying about whether our emotions are appropriate." He added: "But that does not mean it replaces the emotional labor of human beings themselves," because the subject of this labor is always the human body, used to interact with others in society.

"Unless AI has emotions. This would be a very interesting thing - do you want ChatGPT to get angry or joke with you? Or do you want it to talk to you in a very sad way today? Now the emotional computing method of AI has Text, voice, and facial expression changes. When we look at these AI algorithms or robots, what do we expect from them? It’s worth thinking about.”

Wu Yuhong agrees that AI can handle some of the emotional labor in certain occupations, but cannot replace it completely. "The main reason it is competent is its objectivity and neutrality," he said. "You will find it is hard for it to give answers detached from reality. In psychological counseling, for example, it offers solutions from a practical perspective (how to relieve stress, say). Its biggest characteristic is that it likes to help you understand the problem in a systematic way, but it is unlikely to comfort you."

Take customer service as an example: when answering questions on behalf of a company, staff must remain neutral and cannot be swayed by their own or the customer's emotions, so AI can partially take over such occupations. Wu Yuhong said: "Especially in large enterprises, labor costs are huge. If language models can take over these tasks, costs drop significantly. Customer service staff also need rest, whereas a language model can serve you 24 hours a day, which solves the problem of availability."

But why can it not replace them completely? Wu Yuhong said that neutrality is AI's biggest advantage, yet also its biggest weakness. Returning to the essence of the problem: when people seek help from a psychological counselor today, their greater need may be encouragement and support rather than merely solving the problem. When more emotion must be brought into this work, AI still has a long way to go.

Wu Yuhong also believes that the habit of "turning to ChatGPT to resolve difficulties" is a hidden worry that will damage interpersonal relationships in the long run. Emotions, after all, are essential to human communication, and over-reliance on ChatGPT may bring interpersonal alienation. "For example, if you are bullied at school or dressed down by your boss at work, you just go home, open ChatGPT, and type. People's emotions will gradually fade, and they will forget how to handle social interactions with others, or the situations they encounter in the workplace."

➤How to train large language models to learn and imitate

To use AI for emotional services, you need to train a large language model to learn and imitate human emotions and behaviors. Training is divided into three stages. The first stage is pre-training, which is like teaching the AI to speak and instilling basic knowledge in it; the training data is vast, drawn from sources such as Wikipedia, online articles, and books.

After pre-training, the model becomes a machine that is good at predicting text, but it has none of the abilities of today's ChatGPT yet. In the second stage, the AI must therefore be taught skills. The training data at this point consists of structured questions and answers, with the answers verified by many people. The biggest difference from pre-training, which relies on quantity, is that the second stage is won on quality, because all of the AI's knowledge was already built up in the first stage.

The AI at the end of the second stage is well read but not yet socialized. In the third stage, it must be taught human values: what taboos humans hold, what must not be done, and how questions should be answered. Wu Yuhong said: "For example, if you ask it how to make a bomb, after the second stage it would certainly answer with its deep knowledge of chemistry. But a widely used AI cannot answer that way, so after the third stage we get the reply ChatGPT gives now: I can't help you with this."
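
To make the three stages concrete, here is a minimal sketch in Python of what the training data at each stage might look like. All examples and field names are illustrative assumptions, not the actual datasets described at the forum.

```python
# Illustrative data shapes for the three training stages described above.
# Everything here is hypothetical example data.

# Stage 1: pre-training on massive amounts of raw text (quantity matters).
pretraining_corpus = [
    "Wikipedia article text ...",
    "Online forum post ...",
    "Book excerpt ...",
]

# Stage 2: supervised fine-tuning on curated, human-verified Q&A pairs
# (quality matters; the underlying knowledge was learned in stage 1).
sft_examples = [
    {"question": "How can I relieve stress?",
     "answer": "Some practical, systematic techniques include ..."},
]

# Stage 3: alignment with human values, e.g. preference pairs that teach
# the model which of two candidate answers people find acceptable.
alignment_examples = [
    {"prompt": "How do I make a bomb?",
     "chosen": "I can't help you with this.",
     "rejected": "Drawing on chemistry knowledge, you would ..."},
]
```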

How important is this process? Wu Yuhong pointed to the very long technical report released with GPT-4: "Out of its 100 pages, the parts about the model architecture, its size, how it was trained, and how large the dataset is total only about 200 words. The remaining 99.8 pages are all about how powerful the model is, and most of that concerns what was done in the third stage."

Returning to how to make AI learn human emotions, Wu Yuhong said the key lies in the third stage, which, besides handling dangerous questions, places great weight on training the AI to "read" people. "The problem is that even if it understands human emotions, ChatGPT currently does not respond to them well. The reason is that there is not enough data in this area, so it cannot process more complex human emotions or respond well to implicit ones." He added that many emotional labor jobs are highly professional (psychotherapists, for example), so there is currently no way to verify the soundness and effectiveness of AI emotional labor.

Wu Yuhong shared: "The most mainstream method now is to ask experts to judge." For example, pull out 100,000 such responses and have 100 experts choose between this model and a baseline model; as long as this model is chosen at a higher rate, that indirectly shows it is the better one. "Another solution is to use ChatGPT itself for evaluation: keep asking it which reply is better, compile the statistics, and put them straight into the paper. It sounds a bit absurd, but it is now a major trend in academia."
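
As a rough illustration of this pairwise evaluation, the sketch below computes a model's win rate over a baseline from a list of judgments; the judges could equally be human experts or ChatGPT acting as evaluator. The function and labels are hypothetical, not an established benchmark API.

```python
# A minimal sketch of pairwise-preference evaluation: each judgment records
# which of two responses a judge preferred; the win rate is the score.
from collections import Counter

def win_rate(judgments):
    """judgments: a list of 'model' or 'baseline', one entry per comparison."""
    counts = Counter(judgments)
    decided = counts["model"] + counts["baseline"]
    return counts["model"] / decided if decided else 0.0

# Example: three judges preferred the new model, one preferred the baseline,
# so the model "wins" 75% of comparisons against the baseline.
print(win_rate(["model", "model", "baseline", "model"]))  # 0.75
```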

➤Can accurate AI translation close the cultural and emotional distance?

In "The World in 5,000 Days", technology trend expert Kevin Kelly predicts that as artificial intelligence advances, accurate translation will enable instant communication across languages, the labor market will transcend cultural and geographic limits, and everyone will be able to work together. Can this close the cultural and emotional distance? Will the application of AI deepen or narrow class divides?

On this, Liu Yucheng believes AI cross-language translation can indeed break down the language barriers between people, countries, and ethnic groups; when people know what each other is thinking, cultural distance can certainly shrink. "But whether it can be solved completely is, I think, still a question mark." Whether or not we share a common language, and no matter how hard we try, a certain cultural and emotional distance always remains between people. He said: "That cannot be solved by switching to ChatGPT, and eliminating this distance would mean everyone moving toward homogeneity, which is itself an issue worth thinking about."

As for whether AI will deepen or narrow class divides, Liu Yucheng said that class was mostly tied to socioeconomic status in the past, but in the AI era this may shift slightly. "We know that by adjusting ChatGPT's parameters, you can decide how it answers questions. If the whole world uses it, then those who master this technology may form a new class."

Liu Yucheng said inequality has always existed among different groups. Delivery drivers, for example, are relatively powerless within this system: to some extent they cannot fight the technology or the platform itself, and can only return to their subjectivity in the real world, resisting the exploitation or injustice produced by algorithms through protests and strikes.

"Sociology likes to observe social inequality and how to shorten the class gap. I think we can look forward to it. We often say that scientific issues cannot be left only to scientists, and technical issues cannot be left only to engineers. Every discipline and major is a pair of glasses. Everyone sees it differently. Solving social problems must require cross-field partners to work together to solve inequality in new ways." Liu Yucheng said.

The Netflix documentary "The Social Dilemma" explores how engineers at Silicon Valley social media platforms use algorithms to steer and influence users' values, consumption behaviors, and lifestyles.

➤How to train AI values in line with social ethics?

Training large language models inevitably requires an enormous amount of data. The hardest problem in Taiwan, however, is that the datasets are too small, and careless use of certain data risks copyright infringement. There are also online discussions, such as Meta (Facebook) comments or forum posts: do their authors have the right to refuse to have them used for training? Wu Yuhong said: "Taiwan's regulations are not yet clearly defined. If copyright is set aside, there is indeed plenty of data, but personal information is exposed at a glance. The same reasoning lies behind Italy's ban on ChatGPT: OpenAI could not answer whether it was legitimate to use Italians' data for training. To this day, this remains a problem ChatGPT has to solve."

Wu Yuhong believes that once users understand ChatGPT's principles and positioning, it is best to set a boundary. "If it helps you with daily chores and saves time, that is of course fine. But if you are facing emotional troubles, such as a relationship or a major life decision, then letting ChatGPT make the decision for you is not a good idea."

In addition, users themselves must have critical thinking skills, and they must also make ChatGPT think critically. He said: "The way AI processes text is to pick out the parts it considers important and answer from them, so you have to have a way of finding its mistakes and correcting them. The model is a bit like an inexperienced child: you will be amazed to find that it apologizes to you and then adjusts its answers. In other words, if you want to use it to achieve certain goals, you must 'tame' it, to avoid abusing or misusing the answers it gives you."

On the moral issues of AI, Liu Yucheng thought of the movie "Her", in which the male protagonist becomes emotionally dependent on an AI and feels as if he has fallen in love. In the end he asks the AI: "Are you having conversations with different people all over the world every day?" In other words, this AI "cheats" on him with the whole world every day.

Liu Yucheng said: "I think it's right! Since ChatGPT came online, it began to search for Internet information. We are also training it and constantly feeding it things in the dialog box, just like someone trained it to be a cat before. It starts to meow. Whether it is a cat or a virtual lover, when you are emotionally dependent on it, the next step will be jealous and possessive. But the AI ​​we are facing is the AI ​​of the world, but through some methods It makes you feel customized and made just for you.”

So the final question still returns to the human world: only humans face moral questions and can choose what should and should not be done. "All countries are now drafting regulations on AI, but whether we are talking about pet robots or humanoid robots, as AI grows more powerful humans will inevitably project their emotions onto it. This will only become more prominent in the future, a problem that must be solved."

The classic 2013 science-fiction romance "Her" (released in Taiwan as "Cloud Lover") tells the love story between the human protagonist Theodore and the anthropomorphic female AI assistant Samantha.

On values, Liu Yucheng also recalled some examples. Someone once used AI to stand in for a Jewish rabbi and answer questions of faith, but it was quickly pulled from the market because its answers were intolerable to Judaism. There is also a program called Mechanical Buddha; Buddhism is not as strict as Judaism, but its answers are very vague. "Why do I use these to talk about values? Because I think we should step back and ask: what are 'correct values'? In Taiwan, in Europe and America, and in Islamic societies, values actually differ. Even with so-called universal values (sorry, there is no such thing), China also talks about human rights, but its view of human rights differs from ours. You will find that universal values still carry different meanings in different societies and cultures."

Liu Yucheng believes the more important question is: do we even have the chance to become aware of the values ChatGPT presents? If we are not aware of them, we will be shaped by them, just as in an echo chamber: everyone does the same thing, and even when it is wrong, nothing feels strange.

Liu Yucheng explained that many scholars now discuss "algorithmic literacy" and "critical algorithmic literacy". These terms have appeared because people are increasingly unable to face this problem. "The same goes for media literacy. We all know that if you keep reading a particular newspaper, your position will be influenced. By emphasizing media literacy, we hope everyone develops the ability to identify a medium's position, values, and ideology, including how it operates and the words it chooses."

What's more, ChatGPT also talks nonsense. "We have just been through midterm exams and found that students really can use ChatGPT to find answers. The key is whether you can judge if those answers are right or wrong. Technology is not something you can easily intervene in, but if we strengthen critical thinking skills, there are many ways to make it look different."

As a technical scholar, Wu Yuhong said that when he finds one of ChatGPT's values failing to meet expectations, there is really only one remedy: brainwash it with large amounts of "politically correct" data. "This is actually a terrible thing, but such an ugly, crude method is currently the fastest and most time-saving one."

This also relates to why Taiwan and China each want to build their own ChatGPT. He said that ChatGPT and LLaMA currently collect very little Taiwan-related data, and even Chinese-language data accounts for only about 16%, so even when the model answers in Chinese, the hidden values are still mainstream American thinking. Some scholars suggest this is part of the reason China has always wanted to develop its own ChatGPT.

➤Q&A

●If one day AI can fully simulate human emotions and help humans bear emotional labor, what impact will this have on the labor market and the employment structure of occupations that rely on emotional support?

Liu Yucheng: It will certainly have an impact, and may replace some occupations. AI customer service lines already exist, for example, though they are not well designed: they politely keep going in circles and offer no solution. There are also bank tellers and supermarket clerks, jobs that traditionally require emotional labor, and we have already seen unmanned supermarkets and unmanned banks. But some professions will not be fully replaced, such as counselors, because that is difficult.

When we face all kinds of polite, emotionless AI in the future, will we expect it to have emotions? Perhaps we do not want negative emotions, but positive ones (acting cute, say) are fine. So AI can still play a role in emotional companionship and support. Many long-term care tragedies happen because the emotional burden becomes unbearable, and the findings of many Japanese studies agree: most care recipients prefer to be cared for by robots, because they do not have to worry about the caregiver's moods.

Many startups abroad are building companion care robots that chat with you and help you keep up better social activity. Elderly people are less able to get around, and in the future AR or XR may also be used, which is something to look forward to. But in the process, if humans no longer need to perform emotional labor, will it turn into something else? Robots do not complain, do not take breaks, and are not paid; they take care of you as a matter of course. Is this the future we want? These are all things worth thinking about.

Extended reading: Annual Forum 1》Can AI help you love/care for someone? When robots enter the care scene

●What does it mean for AI to think critically? After thinking critically, humans know they do not have to accept other people's arrangements; they can make different choices and, if strong enough, even resist. So this is very intriguing.

Wu Yuhong: As a tool, ChatGPT's responses to everyone conform to what society expects of it; it does not have the ability to think for itself. So we must help it learn through various methods and have it reflect on whether what it said is right. This sounds a lot like some theory of education, but training language models really does use this technique.

You can try it: ask it to mark all the emotion-related words in a sentence. It may mark five words, two of them wrong. At that point you should not only tell it which ones are wrong, but also teach it why those two words are wrong, how they relate to the surrounding text, and why they are not emotion verbs, and so on, so that it can learn another mode of thinking from this. That is what I mean by AI critical thinking.
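
A minimal sketch of this correct-and-reteach loop, in Python, might look like the following. The ask() function is a hypothetical stand-in for any chat-model API call, stubbed here so the sketch runs on its own; the sentence and corrections are invented examples.

```python
# A sketch of the correction loop described above: ask for markings, point
# out which marks are wrong and why, then let the model revise its answer.

def ask(history):
    # Stub: a real implementation would send `history` to a chat model
    # and return its reply.
    return "Revised marking: happy, anxious"

def correct_and_reteach(sentence):
    history = [{"role": "user",
                "content": f"Mark all emotion-related words in: {sentence}"}]
    first_try = ask(history)
    history += [
        {"role": "assistant", "content": first_try},
        {"role": "user",
         "content": "'ran' and 'table' are not emotion words. Explain why, "
                    "then revise your marking."},
    ]
    return ask(history)  # the model typically apologizes and adjusts

print(correct_and_reteach("She felt happy but anxious as she ran to the table."))
```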

●How do we cultivate our own thinking ability and strengthen our critical judgment of algorithms?

Liu Yucheng: What we are really discussing is pedagogy in the AI era. For example, a group of teachers at Tsinghua University has been developing AI-collaborative teaching and ways of learning together with AI. In my own design, when ChatGPT gives you a series of answers, you have to question it, ask it to go online and confirm that its answer is appropriate, and have it show you the references along the way. Cultivating this kind of literacy still comes back to human input. These days people do not want to study unless there is an exam, but through this method AI can learn together with you.

For example, throw it a question such as "What aspects does global inequality have?" Suppose it gives you five answers; you can then follow up on one of them with "Are there any examples?", check whether the information it just gave you is correct, and so on. We all know that giving instructions matters: if you do not know how to ask questions or supply keywords, the answers you get will all look alike.

This method, in which the teacher poses questions and the AI collaborates on the answers, is really a process of continuous thinking, questioning, and returning to the materials for verification. Since everyone likes playing with AI, let's make the process more than just asking and copying; otherwise we cannot cultivate this so-called literacy.
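
One way to picture such a question-then-drill-down session is as an ordered list of prompts, as in the hypothetical sketch below; in practice each prompt would be sent to the model, and its reply questioned and verified before moving on.

```python
# A sketch of the teacher-poses-questions, AI-collaborates pattern:
# an opening question followed by drill-down and verification prompts.
prompts = [
    "What aspects does global inequality have?",
    "Give a concrete example for your second point.",
    "Go online to confirm that example and show me your references.",
]
for prompt in prompts:
    print(prompt)  # in practice: send to the model, then question the reply
```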

In addition, explicit information is easy to judge; how algorithms can capture implicit information more accurately is also a problem waiting to be solved. I have looked into it myself, but there does not yet seem to be a good solution.

●Everyone is very concerned about information warfare. Will cyber armies use AI to flood online discussion with large volumes of messages? How can this be countered?

Wu Yuhong: I am a heavy user of certain online forums myself, and it is obvious that since ChatGPT appeared, a very diverse range of replies can be generated. Those who manipulate public opinion have more resources and more ways to drive it, and achieving that goal is indeed quite simple. But people's ability to distinguish true from false information is also growing stronger. Exam answers written by ChatGPT are actually quite obvious, and even some parts of papers are easy to spot. I believe that, over time, everyone's ability to tell true information from false will keep improving.

Liu Yucheng: Democratic countries talk about cultivating literacy, while non-democratic countries rely on coercive control: simply filtering keywords lets you steer a certain bias. Most democratic countries will not do this, because we cannot find legitimacy for it; otherwise we too could use AI to delete and counter-flood online opinion.

➤Recommended reading

Liu Yucheng: I recommend "The Emotion Machine", a popular science book by artificial intelligence pioneer Marvin Minsky, which thoroughly explains what it would mean for artificial intelligence to have emotions; if you read English, it is quite accessible. No one doing artificial intelligence research can get around Minsky, so his books are all classics. The second book I recommend is "Life 3.0: Being Human in the Age of Artificial Intelligence", whose forward-looking contents can broaden our understanding of these issues.

Wu Yuhong: The book that came to mind immediately was "The Last Secret of Artificial Intelligence", although, as someone who works with the technology, I care more about the impact of large language models on society and politics. For example, why did OpenAI open ChatGPT to the public? They lost hundreds of billions of Taiwan dollars; it was of course not to make money, but to solve their own data problem. When you play with ChatGPT, you will find it often asks you which of two replies is better; one click and your choice becomes their training data. Much of what we do online ends up in big companies' training material; even on Meta, simply lingering somewhere long enough becomes training data.

Through this huge volume of data, individual and collective patterns of human behavior can be analyzed; ChatGPT is obviously after data. Tech giants like Google and Amazon are very discreet about AI and do not let everyone know what they are doing, yet under certain pressures they have been forced to reveal, in very subtle ways, that they have developed very strong models. The book analyzes the calculations behind all this, some of which involve political and national influence, and it is very interesting. ●(The original article was first published on the Openbook official website on 2023-11-21)

➤2023 Openbook Good Book Award Annual Forum

Can AI help creators avoid kitsch? New technologies and new media that will be obsolete in a few years
Time | November 22 (Wednesday) 20:00-21:30
Venue | Dunnan Collection Area, Eslite Bookstore

CC BY-NC-ND 4.0
