AI is a wolf in sheep's clothing, says expert

Dr Maha Bali's ideas on artificial intelligence helped coin a key term, now the name of a new journal. She tells Al Majalla why she doesn't use ChatGPT for fun and why AI needs more Arab experts.

In an interview with Al Majalla, Dr. Maha Bali says she surrenders to the idea that artificial intelligence cannot be stopped but stresses the importance of shaping it to advance our values — not destroy them.
Axel Rangel Garcia


Critical AI is a new phrase circulating among academics around the world as they face the challenges of cutting-edge, generative artificial intelligence technology.

It will even be the name of a new interdisciplinary journal, as the rise of AI has far-reaching implications that will go way beyond the classroom and the campus. The Critical AI publication is based at Rutgers University’s Center for Cultural Analysis, affiliated with the Rutgers Center for Cognitive Science, and published with Duke University Press.

The reach of AI is broad, moving significantly beyond its origins in computer science. It was a professor of English at the University of Kansas who recently summed up the importance of the technology in direct and simple terms. Kathryn Conrad said: “A robust critical AI literacy is essential for everyone—with emphasis on 'critical'.”

She added: “These generative technologies are having an impact on the world, and the ethical challenges they entail—such as the exploitation of labour from the global south and the potential reinforcing of a western/global north perspective on the world, due to the kinds of data that have been scraped to train the models—will also have an impact on the world.”

Conrad credits Dr. Maha Bali of the American University in Cairo with coining the phrase “Critical AI”. Bali is a leader in the study of ed-tech and has been giving keynote speeches on open education, digital pedagogy, and social justice since 2017.

One of the most prominent commentators from the Arab world on AI, she has referred to the technology as a “wolf in sheep’s clothing”, not least due to the way in which one of the most famous pioneering technology platforms in the field was put together.


Dark beginnings

In an interview with Al Majalla, Bali said that the way in which ChatGPT was designed to be "ethical" was itself problematic. The process depended on humans interacting with the very questionable and upsetting material the system was designed to reject.

She pointed to a Time magazine investigation in January that revealed how programming the generative AI interface to avoid violence, swearing and offensive content left workers "looking through a lot of very harmful text and images".

The sub-contracted staff were in Kenya. "Those people were underpaid and also suffered a lot of mental health issues because of the work they were doing to make ChatGPT a more ethical AI," Bali said.

The impact of the work on those who carried it out left Bali "disgusted" and meant she "stopped using AI for fun", leaving her reluctant to embrace the technology.  

"I only use it if I am giving a workshop or really need to test something," she said.

Disparity over languages

There are also disparities in the languages the tool itself can handle, Bali says.

"ChatGPT has certainly not been trained on enough Arabic material or material originating from our part of the world. So it distorts or imagines a lot of the history and politics of our region."

This also creates problems with quality: "Its written Arabic is mostly grammatically OK, but it sounds translated, so it is not the most fluent. However, it is possible to train it to do better in and on Arabic."

"It's just a question of whether we want to make it a priority and develop our own and find the datasets to train it well. That may require us to move beyond the internet because quality Arabic internet material may not be enough, even though Arabic is one of the more popular languages online."

Bali's central theme for the technology also applies to its current and potential use in the Arab world.

"There needs to be more development of critical AI literacy so people are able to use the AI in the right way and know when why and where to use it and when not to, and to recognise its biases and how problematic and harmful it can be," she says.

Arab AI for the Arab world

"I would love to see more AI applications originating from our part of the world for our purposes. I know there are Arabs on OpenAI's team."

They are not the only ones working in the area, and Bali points to others who show promise. "I know there is Rana El Kaliouby, who has been leading affective AI. Her company, Affectiva, is a spin-off from MIT.

"I also really admire the work of Nagla Rizk at my institution and A2K4D. They are managing funding for FeministAI and they have been working on a project called OpenAIR for some time now, all around AI for good and such."

But before would-be AI entrepreneurs can reach the private sector or the cutting edge of the technology in academia, they must navigate a field that can be challenging and has even met with hostility.

 "One of the confusing things for students is the disparity in responses from professors. Some professors are banning it, and I think there is no point in that since you can't even detect AI very accurately."


"For me personally, I did not want to ban AI from students, but I wanted them to be transparent about using it. But what I am thinking a lot about these days is citations."

"For example, I started to realise that if you want to ask AI about something and you ask the results it will give you a synthesis of the data set it's been trained on, probably the internet."

"If the student cites they got it from AI, that is not actually where they got the ideas from. We have been trying to teach students for a long time that referencing things is important, but the emphasis has been on using copy and paste without paraphrasing. Yet we still need to cite paraphrasing."

Mind-changing

Bali's ideas on the promise held by AI were shaped in part by contributing to a paper on the potential uses of the tech. Contributors to the research, published in the Asian Journal of Distance Education, were asked to speculate on how it might develop, along both negative and positive lines.

"I had a mostly negative view of AI because of all the ways it had been perpetuating harm and inequality in other contexts. My undergraduate degree was in computer science and my thesis used neural networks/machine learning, so I understand how it works," Bali said.

Taking part in the article "forced me to try to imagine a good AI," she added.

"It was useful, because if we're not going to be able to stop AI, then we should at least be able to be intentional about shaping its future to advance our values and not destroy them. Doing nothing also doesn't seem like an option."

