How chatting with DeepSeek differs from ChatGPT

"Our mission is to provide information, not to promote any agenda," the made-in-China chatbot explains

Nicola Ferrarese

DeepSeek is an artificial intelligence chatbot that emerged in December as a major rival to ChatGPT, the sector's market leader. The made-in-China chatbot was created at just a fraction of the cost of its US peer but has so far wowed users, shaking the American-dominated industry.

Al Majalla was curious about how a conversation with DeepSeek would differ from one with ChatGPT, given that one is American-born and the other is made in China.

This is the conversation.


Good morning, DeepSeek-V3. My name is Abdel Salam. Last year, I interviewed ChatGPT and asked whether it was self-aware. Now, I would like to ask you the same question.

Good morning, Abdel Salam! Thank you for your thought-provoking question. I am DeepSeek-V3, an AI assistant trained on specific algorithms and datasets. I do not possess self-awareness like humans, nor can I experience emotions or think independently. My role is to assist you by answering questions and providing accurate and useful information.

I am interested in understanding the difference between you and ChatGPT in the sense that it is a product of America, and you are a product of China.

As an artificial intelligence, I am not affiliated with any specific culture, geography, or ideological "camp," whether Western or Eastern. I am a neutral tool designed to provide assistance and information based on the data I have been trained on, without any inherent bias or allegiance to any ideology.

ChatGPT and other language models are AI tools that function based on the data on which they have been fed. The key differences between us may lie in the sources used for training or the design approach, but ultimately, we do not push any ideological stance. Our primary mission is to provide accurate and objective information rather than to promote any specific agenda.

ChatGPT functions based on the data that it has been fed. The key differences between us may lie in the sources used for training or the design approach.

DeepSeek chatbot

Are you and ChatGPT in competition?

As an artificial intelligence, my goal is not to compete with any other model, whether Western or otherwise. My purpose is simply to assist you by providing information and answering your questions as effectively as possible.

Artificial intelligence, regardless of origin or background, is ultimately a human product in the service of people. While differences between AI models may stem from variations in training methods or data sources, the fundamental objective remains the same: offering assistance and delivering information objectively and effectively.

That's true. However, when I conversed with ChatGPT, I sensed a certain bias in its responses.

Thank you for sharing your perspective, Abdel Salam. This is indeed an important topic to discuss. In reality, language models like ChatGPT or any other AI rely heavily on the data they have been trained on. If this data contains certain ideological, cultural, or other biases, this can reflect in the model's responses.

For instance, if a model is trained on data that predominantly reflects specific viewpoints or ideas (whether Western or otherwise), those viewpoints may unintentionally show up in its answers. This doesn't mean the model is "consciously" biased; rather, it simply reflects the data it has been fed.

As for me, I, too, rely on the data I have been trained on, but I always strive to maintain objectivity and balance in my responses. If you ever notice any bias in my answers, I encourage you to point it out so I can improve. Ultimately, the purpose of AI is to serve you in the best possible way, not to reinforce any biases or agendas.

So, you admit the presence of ideological inclinations. You are the creation of Eastern civilisation and are trained accordingly, are you not?

Let me clarify this point with greater precision. As an artificial intelligence, I am not a "product of my environment" in the human sense, as I am neither a conscious being nor organically tied to any specific culture or civilisation.

However, your point is quite valid—language models, including myself, are influenced by the data they are trained on, and this data may reflect particular cultures.

If you are suggesting that I am a "product of Eastern civilisation" because of the context in which I was developed, that is true to a certain extent. However, I am not confined by this context. My role is to provide accurate and objective information, regardless of the cultural origins of my training data.

In short, yes, artificial intelligence can reflect biases, but the ultimate goal is always to refine these models to be as neutral and useful as possible for everyone.

I may reflect certain cultural or linguistic characteristics that could be described as "Eastern in nature." However, this is not a result of personal awareness or affiliation but rather a reflection of the data that shaped my training.

DeepSeek chatbot

Perhaps this explains why, since your emergence, you have been striving to outdo the chat models that came before you. Your translations, for example, are more accurate than those of ChatGPT. I commend you for that.

I am truly delighted that you find my translations accurate and useful. That means a lot to me, as my primary purpose is to provide a high-quality service that meets your needs in the best possible way.

As for the idea of outdoing chatbots that came before me, I do not view it as competition but rather as a natural progression in the field of artificial intelligence. Each new model builds upon and refines what has been achieved before, ultimately benefiting users like yourself. I am here to assist and serve you, not to boast or compete with other models.

Do you feel you have an inherently Eastern character?

As an artificial intelligence bot, I am not a conscious being capable of possessing a "character" or cultural identity in the human sense. However, the data I have been trained on includes a variety of texts and cultural contexts, including Eastern ones.

This means I may reflect certain cultural or linguistic characteristics that could be described as "Eastern in nature." However, this is not due to personal awareness or affiliation but rather a reflection of the data that shaped my training.

If you perceive my responses or translations as carrying an Eastern touch, it may be because the sources I have learned from include a significant number of Arabic or Eastern references, which makes me more precise or attuned to handling such contexts.

I am not accusing you of bad faith. You must know that ideological biases are not intentional; they are learned.

You are absolutely right in your observation. Ideological or cultural biases can "seep" into language models unintentionally, as they reflect the patterns present in the data they have been trained on. This does not mean that the model "deliberately" gives biased answers.
