Four key risks
Among these, four risks stand out, though the list is not exhaustive.
The first risk is the growing difficulty of distinguishing authentic from fabricated content in widely circulated news and images. This fuels the spread of fake news and manufactured imagery, which can deceive and misinform. It also feeds the proliferation of hatred rooted in increasingly common "fabrications" about different races, religions, and cultures.
The second risk is the enhanced capability of cyber attackers to mount more effective attacks and breach sensitive, well-secured programs and applications. The question has shifted from whether a program or application will be hacked to when it will be compromised.
The third risk is that the threat of generative artificial intelligence to job security extends beyond manual or simple, non-specialised roles. This was evident in 2023 when Hollywood's actors and screenwriters' unions initiated a months-long strike. They demanded safeguards for their rights in response to production companies and studios experimenting with AI-generated scripts.
These companies and studios collaborated with generative AI specialists to develop programmes capable of creating dramatic content with minimal reliance on actors, using detailed digital captures of actors' facial expressions and building robots resembling them for role-playing.
The fourth risk concerns the potential for students to rely on programmes like ChatGPT to complete school and university assignments and to take tests. This dependency could undermine the educational process and blur the distinction between outstanding and average students.
Consequently, educational authorities in various countries have urged AI technology companies to develop tools that help educators and examiners differentiate between human and machine-generated work.
However, an even greater danger is the ease with which some AI users might rely on these technologies to make decisions. This risk is likely to grow as the algorithms used in AI become more effective.
As we enter 2024, a principal challenge is that the potentially hazardous aspects of AI threaten to overshadow its many benefits, which are familiar to anyone using its advanced generative technologies.
The issue is expected to escalate in the new year as the strong momentum of AI companies continues to drive the development of increasingly sophisticated technologies. This is happening even though some industry leaders, such as OpenAI's CEO Sam Altman, do not object to legislative intervention in this domain, as evidenced by his testimony before the US Congress in May 2023 calling for agreed-upon regulatory measures.
Still, despite these concerns, legislative efforts and proposals for regulating AI development, such as establishing an international supervisory body akin to the International Atomic Energy Agency, remained largely in the realm of good intentions by the end of 2023.