Having already penned three articles on the subject of AI (1, 2, 3), I am continuing the series with a news digest from the world of artificial intelligence.
Generative AI in the market
The hype around AI keeps building: in 2023, startups claiming to use 'AI' raised more than $20 billion.
Crypto was once hyped in much the same way, and many companies built Ponzi schemes on the back of it. A market heated by hype eventually cools, as is happening now: Reuters reports a string of bankruptcies among crypto firms. Ponzi schemes extract money from individuals, and when these firms go bankrupt, their clients often receive nothing. Unfortunately, ethical behavior is rarely observed among those who create such schemes.
Scientists are already raising the alarm over the potential risks that large language models present. But is it reasonable to expect the owners of AI firms to uphold ethical standards in the face of such a financial windfall?
Generative AI in games
NVIDIA recently showcased an integration of generative AI into the logic of non-player characters (NPCs). You can see it in action in this video:
With this advancement, players can verbally converse with NPCs and receive responses nearly indistinguishable from human interaction. The experience is immersive: an NPC listens to you and gently guides you through the game's narrative. Imagine a pet in a role-playing game (RPG) that accompanies you throughout the story, battling alongside you and offering helpful advice. This pet, powered by a large language model (LLM), converses so realistically that you might momentarily forget it isn't human. Enhancing the experience further, the system rewards you for achieving objectives you set for yourself, informed by your interactions with the LLM. Over time, you grow attached to this pet; it becomes more than a companion and starts to wield greater influence over you.
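To make the mechanics concrete, here is a minimal sketch of how such a companion could be wired up. Everything in it is an assumption for illustration: NVIDIA has not published its pipeline at this level of detail, and the `query_llm` function and the "Ember" persona are invented placeholders for a real chat-completion API and prompt.

```python
# A minimal sketch of an LLM-driven NPC companion (illustrative only).
# `query_llm` is a hypothetical stand-in for whatever model API a game
# would call; NVIDIA has not published its pipeline at this level of detail.

def query_llm(messages: list[dict]) -> str:
    """Stand-in for a real chat-completion API; returns a canned line here."""
    return "A wise choice, friend. The northern pass should be safe by dawn."

# Invented persona prompt: it pins the NPC to a character and, crucially,
# tells it to track the player's goals and steer them along the story.
PERSONA = (
    "You are Ember, a fox companion in a fantasy RPG. Stay in character, "
    "remember the goals the player states, and nudge them toward the main quest."
)

def npc_conversation() -> None:
    history = [{"role": "system", "content": PERSONA}]
    while True:
        player_line = input("You: ")
        if player_line.lower() in {"quit", "bye"}:
            break
        history.append({"role": "user", "content": player_line})
        # The entire dialogue so far is sent on every turn, which is what
        # lets the NPC "remember" you and build on earlier conversation.
        reply = query_llm(history)
        history.append({"role": "assistant", "content": reply})
        print(f"Ember: {reply}")

if __name__ == "__main__":
    npc_conversation()
```

The design point worth noticing is that the whole conversation history is fed back on every turn; that accumulated context is exactly what makes the attachment and influence described above possible.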
Gaming is widely recognized for inducing dopamine release, which can lead to addiction in some individuals. There is an ongoing debate among scientists about gaming's impact: some highlight the harms of gaming addiction, while others argue that gaming fosters the development of certain skills. Whatever their viewpoint, all agree that the dopamine release triggered by gaming has a significant effect on players.
The influence of games on political views and marketing is still not fully understood. We are already seeing relatively simple algorithms, such as Google's search recommendations, sway elections by influencing people's political views. I am intrigued by the effects that conversations with NPCs, intertwined with a game's atmosphere and narrative, could have on shaping people's minds. There have already been instances of individuals committing suicide after text chats with LLMs, and studies have shown that simple text interactions with AI can alter political opinions. How might this dynamic change with more immersive gaming interactions? Our beliefs are formed and modified through dialogue with others, which makes it hard to predict the combined effect of LLM persuasion and the dopamine reinforcement gaming provides.
For further insights and concerns regarding this topic, please refer to the following material.
Use of LLMs in court
Despite numerous warnings about inaccuracies, or "hallucinations", people continue to use LLMs for their convenience and "efficiency". This was precisely the case with Steven A. Schwartz of the firm Levidow, Levidow & Oberman, a lawyer in the US who used ChatGPT to draft a legal filing. Unfortunately, ChatGPT "hallucinated" when composing the references, producing citations to non-existent cases. Such hallucinations are a well-documented characteristic of these models and, in many ways, a feature rather than a bug.
The presiding judge was able to identify the error, and the lawyer has been summoned to court on the 8th of June. It remains to be seen whether he will manage to retain his license.
This raises an important question: What kind of regulations should be put in place to protect society from the potential harm caused by these artificial intelligence tools?
Please refer to another study, which discusses both the risks and potential regulatory measures for the use of LLMs.
AI and deepfakes
Generating deepfake videos is no longer exclusive to governments and corporations. With freely available open-source tools, almost anyone can create them on a home computer.
Europol has voiced concerns about this, as deepfakes can be used maliciously for propaganda, fraud, evidence tampering, blackmail, and non-consensual pornography.
The same applies to speech generation. With 'voice cloning' tools like ElevenLabs, it is becoming increasingly easy to replicate someone's voice and have a text-to-speech model speak exactly like the person being cloned. This poses a significant threat to all voice-based biometrics, including supposedly secure systems such as banks.
Looking ahead, it is reasonable to expect comprehensive real-time audio and video deepfaking solutions to reach the market, potentially enabling live impersonation of individuals.
Prompt engineering
Prompt engineering, an emerging discipline within artificial intelligence and natural language processing, revolves around embedding the task description directly in the model's input, often as a question or instruction, instead of leaving the task to be inferred implicitly. Its primary objective is to optimize prompts so that language models perform well across a diverse array of applications and research topics.
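To illustrate, here is a minimal sketch of the difference between handing a model bare input and handing it an engineered prompt. The `complete` function and the example prompts are invented placeholders, not any particular vendor's API.

```python
# Illustrative only: `complete` is a hypothetical stand-in for any
# text-completion API (swap in a real endpoint to run this for real).
def complete(prompt: str) -> str:
    return "NEGATIVE"  # canned output so the sketch runs without an API key

review = "The battery died after two hours and the screen scratches easily."

# Bare input: the model must guess what task we want done.
bare = complete(review)

# Engineered prompt: the task description is embedded in the input itself,
# here as an instruction plus one worked example (few-shot prompting).
engineered = complete(
    "Classify the sentiment of the product review as POSITIVE or NEGATIVE.\n"
    "Review: 'Fantastic sound quality and it arrived early.'\n"
    "Sentiment: POSITIVE\n"
    f"Review: '{review}'\n"
    "Sentiment:"
)
print(engineered)
```

Guides like the ones below collect and refine such prompt patterns.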
There are numerous guides and video tutorials available for those interested in prompt engineering. Two resources that I highly recommend are:
- SnackPrompt, which offers an extensive collection of prompts.
- Prompt Engineering Guide, which provides a comprehensive guide on crafting efficient prompts.
Call for discussion
A group of distinguished scientists and entrepreneurs, more concerned than I about the rapid progress of AI, have initiated an open call for discussions concerning AI risks. You can find their call to action here: Statement on AI Risk.
They argue that the artificial intelligence technologies being developed today could pose an existential threat to humanity in the future, and they propose that this threat be recognized as a societal risk on par with pandemics and nuclear war.
References:
- Rounds Raised By Startups Using AI In 2023
- Investor Alert: Ponzi Schemes Using Virtual Currencies
- Factbox: Crypto's string of bankruptcies
- Ethical and social risks of harm from Language Models
- You can now talk to video game NPCs, and frankly it's incredible
- Watch this Nvidia demo and imagine actually speaking to AI game characters
- NVIDIA's generative AI lets gamers converse with NPCs
- Evidence for striatal dopamine release during a video game
- Are video games, screens another addiction?
- Your Brain on Video Games: The Neural Bases of Gaming
- How Do Video Games Affect Brain Development in Children and Teens?
- ChatGPT monetisation and ads / propaganda
- AI effect on quality, or the epistemological crisis
- What Is ChatGPT Doing … and Why Does It Work?
- Model evaluation for extreme risks
- LLMs confabulate not hallucinate
- How Google manipulates search to favor liberals and tip elections
- 'He Would Still Be Here': Man Dies by Suicide After Talking with AI Chatbot, Widow Says
- AI’s Powers of Political Persuasion
- A Lawyer's Filing "Is Replete with Citations to Non-Existent Cases"—Thanks, ChatGPT?
- roop (https://github.com/s0md3v/roop)
- Facing reality? Law enforcement and the challenge of deepfakes
- ElevenLabs
- How I Broke Into a Bank Account With an AI-Generated Voice
- Prompt Engineering guide
- SnackPrompt