Saturday, December 21, 2024

“In the future we will be unable to know if a book was written by a person or an AI, and we will not care”

“I console myself with the usual excuse: if it hadn’t been me, someone else would have done it.” Those are the words of Geoffrey Hinton, one of the fathers of artificial intelligence (AI), spoken in May 2023 when he decided to leave Google so he could warn freely about the risks this technology entails. “We will not know what is true and what is not,” he cautioned.

For Pepe Cerezo, an expert in digital business strategy and development, founder of Digital Journey and partner at Programmatic Spain, that thesis is alarmist. Even so, Cerezo does not ignore the dangers of AI. What’s more, in this new stage of his professional life his objective is none other than to “raise awareness” of the importance of using it properly and to “train” those who will, inexorably, be “forced” to work with it.

Artificial intelligence already poses challenges in the field of communication and information: ethical, educational and social challenges that are essential to address. Cerezo’s maxim may seem bold, but he states it assertively: “In the future we will be unable to identify whether a news story or a book has been written by a person or an AI, and besides, I think we won’t care.”

This book seems like the foundation of what Digital Journey is intended to represent. What is the objective of the project and what steps are you going to take?

Digital Journey was born as a think tank. I am taking it into the field of digital transformation in a broad sense, and now with a very important focus on how artificial intelligence is changing society, the economy, the way we relate to each other, business and the world of information. I think this space for debate and reflection has to be as transversal as possible. That is why I want to focus it on several areas of action and thought: one is, obviously, information and communication; another has to do with rural Spain, emptied Spain, the territory; another with young people; and another with the cultural and organizational sphere. The first thing we created is this book, which is really a collection of books. The first is about generative AI and its impact on the world of information and communication. The next will focus on the world of education, because artificial intelligence is changing it too. In parallel, we are beginning to organize debates, talks and so on. And I am preparing a workshop on disinformation so that journalists and teachers can be trained to detect and combat it.

David Sanz says in his chapter that he believes the cases in which ChatGPT is used as a personal psychologist or confidant are exceptional. However, the study Perceptions about the impact of digital content on children and adolescents, carried out by GAD3 and the SOL Foundation in September, says that a third of Spanish teenagers trust it for advice on their social relationships. Isn’t that alarming?

All data must be interpreted with some caution, because we can find studies that say one thing and others that support the opposite. I think we have to be very aware of how artificial intelligence works. Obviously it is alarming, just as our excessive use of social networks or mobile phones is alarming, right? But the issue is not so much the number or the data, but how we relate to it and how able we are to identify those risks. The risks are present in every technological action we promote. We have to know, and be very aware, that AI cannot supplant certain roles that doctors, psychologists or journalists play.

“AI cannot supplant certain roles that doctors, psychologists or journalists can play.”

Cerezo, at another point in the interview.
SERGIO GARCÍA CARRASCO

Is it really feasible to endow artificial intelligence with a moral sense?

AI is based on algorithms, and algorithms are not neutral because they are based on data. They also carry certain biases depending on how they are programmed, what data they ingest, how they carry out that learning… That is why knowledge, and also regulation, are very important. Technology can be programmed to be used one way or another. Just as AI can be oriented toward promoting high-quality content, it can also be used for the opposite: disinforming. I insist: that is why I believe control, transparency and regulation are very important.

We know that the good use of AI requires a collective effort, but experience tells us that any technological advance in the wrong hands can be harmful, and there is no shortage of bad hands, nor will there be. In a world of fine declarations of intent but hidden actions, how can an autocrat be prevented from using AI for his own benefit?

I reiterate that there must be knowledge and awareness of the capabilities of these new technologies and their impact. That is why this project (Digital Journey) was born. Europe has been a pioneer in creating a regulatory and transparency framework for AI. If regulation is done collectively, democratically, it is easier. But it is the same as with other technologies: in the hands of a dictator, the media and television are also tools. And now we are seeing disinformation coming from certain countries, right? In the case of Europe, Russia is right now the biggest destabilizer in terms of disinformation.

So it is inevitable that it will continue to happen.

Yes, it is happening and it will continue to happen. What we have to learn is to control it and not let it spread to everything. If we create a framework of shared knowledge about the risks, it will be easier to find solutions. Risk is always inevitable in life. What we have to do is minimize it and control it.

One of the things that those who want to use AI for their own benefit exploit is the lack of media literacy. Today, each school is free to decide, according to its own criteria, whether or not to include workshops on how to identify and combat disinformation. The EU proposed as far back as 2008 to create a “Media Education” subject along these lines. Is enough attention being paid to this phenomenon?

I think not, with very few exceptions. It happens even in the journalistic world: there is little training within universities or the media themselves on how disinformation is affecting us and how to combat it. Only now have all the alarms gone off, when we see that it can bring down governments, influence elections or spread hoaxes like those we saw with the DANA tragedy. We have to get down to work. That is why regulation and training are very important. I don’t know whether it should be a school subject, but sooner or later we will have to take concrete measures that go beyond the educational sphere and involve business organizations, politicians, brands and advertising.

“Politicians cannot say what is a hoax and what is not.”

Given the distrust that exists toward the political class, any initiative promoted along these lines today can be interpreted as a form of indoctrination. How should it be done? Does a politician have the authority to say what is a hoax and what is not?

Of course it is very difficult, because many politicians are themselves contributing to the existence and spread of disinformation. That is why I believe artificial intelligence goes beyond technology: it is reconfiguring the model of society. So we must equip society with critical thinking. Politicians will be the ones who debate and regulate what can be considered a hoax, but they cannot say what is a hoax and what is not. You have to have independent business models. Checks and balances are very important; that is the basis of democracy.

Are we at a time when the potential dangers of artificial intelligence are highlighted more than its benefits?

It always happens. If we go back to the early years of the Internet, almost all the news was very negative. Right now there is a dispute between those who believe artificial intelligence will take us to a utopian, perfect world, which is not true, and those who see it only as a danger. Artificial intelligence has magnificent components, and when you work with it and are aware of its possibilities it is magnificent. At the same time, we must pay close attention to the risks, but not focus only on them, because we would drive many people away from it.

Cerezo poses for '20minutos'.
SERGIO GARCÍA CARRASCO

How can artificial intelligence help a journalist, a writer or an illustrator, professionals whose jobs require creativity?

The relevant thing about this technology is that it transforms the value chain: creation, distribution, monetization… so we are creating new industries. How does it affect the journalist? Well, in day-to-day work: it helps you optimize processes, it makes it easier to interpret data, to search for information… What will become clear is that the journalist will not be able to remain oblivious to this phenomenon and say: “No, it doesn’t affect me and I’m not going to use AI.” You are going to use it because it is being used all around you. How do you do it in the best way? We come back to the same thing: training the teams. And good governance is very important here: being transparent with the reader, specifying where the data came from and the model that has been used, establishing fair compensation for the author… Because it is the authors, the people, whose work the AI has absorbed in order to train.

Let’s pose a scenario: a company incorporates AI into its newspaper’s editorial systems and hires a journalist. One of the tasks entrusted to the journalist is to work with this AI to train it so that it ends up adopting the style, the vocabulary, even the verbal tics he usually uses when writing his articles…

Yes, that is already being done.

…The company sells it as something positive for the worker because, once the AI is trained, he will be able to automate the most tedious work and focus on more elaborate journalistic pieces. The degree of development is such that the journalist could ask the AI to write an account of the latest government question session, and the result would be practically identical to what he would have written himself. When he goes to ask for a raise, the company knows it already has a trained AI that can do that job at minimal cost, and it uses that fact so the journalist settles for what he is already paid. How do we prevent something like this from happening?

We have to regulate it.

Yes, but very similar things are already happening, for example, with the voices of dubbing actors.

Yes, that’s right, and the Hollywood writers’ strike was also related to this. That is why regulation through collective agreements is essential, and I have the feeling, though I could be wrong, that unions, for example, are not paying attention to this. In the United States we are already beginning to see some large newspapers caught up in internal lawsuits, because unions have come together to denounce that, thanks to AI, the company is receiving money for content in which a person participated. The compensation model must be rethought and, without a doubt, there will be conflicts.

David González presents in his chapter an interesting scenario for the coming years. Is it an exaggeration to say that in the future, beneath a journalist’s report or on the first page of a book, an asterisk will clearly specify that the author is, 100%, a person?

Right now we are doing the opposite: we must indicate when a text was made with AI. Honestly, I think it will end up being ignored. In any case, we will be unable to identify whether it was written by a person or an AI, and I believe we will not care, as long as it reflects the mission and values of the organization. That is, if the report is good, if the article answers what I demand as a reader, or if the journalist who helped develop the AI is rewarded.
