
June 2023 AI monthly

Having already penned three articles on the subject of AI (1, 2, 3), I am now moving forward with a news digest from the world of artificial intelligence.

Generative AI in the market

AI is attracting ever more hype. In 2023, startups claiming to use 'AI' raised more than $20 billion.

Crypto was once hyped in much the same way, and many companies built Ponzi schemes on the back of it. A market heated by hype eventually cools, as is happening now: Reuters has reported a string of bankruptcies among crypto firms. Ponzi schemes extract money from individuals, and when these crypto firms go bankrupt, their clients often receive nothing. Unfortunately, ethical behavior is rarely observed among those who create such schemes.

Scientists are already raising the alarm over the potential risks that large language models present. But is it reasonable to expect the owners of AI firms to uphold ethical standards in the face of such a financial windfall?

Generative AI in games

NVIDIA recently showcased an integration of generative AI into the logic of non-player characters (NPCs); you can see it in action in the demo video.

With this advancement, players can verbally communicate with NPCs and receive responses nearly indistinguishable from human interaction. The experience is immersive: an NPC listens to you and gently guides you along the game narrative.

Imagine a pet in a role-playing game (RPG) that accompanies you throughout the story, battling alongside you and offering helpful advice. This pet, powered by a large language model (LLM), converses so realistically that you might momentarily forget it isn't human. Enhancing the experience further, the system rewards you for achieving objectives you set for yourself, informed by your interactions with the LLM. Over time, you grow attached to this pet: it becomes more than a companion and starts to wield greater influence over you.
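
To make the idea concrete, here is a minimal sketch of how such an LLM-driven companion could be wired up. It assumes the pre-1.0 `openai` Python package and its chat-completion API; the persona, quest, and model choice are illustrative assumptions, not NVIDIA's actual implementation.

```python
# Minimal sketch of an LLM-driven NPC companion.
# Assumes the pre-1.0 `openai` package (pip install "openai<1.0").
import openai

openai.api_key = "YOUR_API_KEY"  # assumption: supplied by the game backend

# The NPC's persona and the current quest are embedded in the system
# prompt, so every reply stays in character and on-narrative.
SYSTEM_PROMPT = (
    "You are Ember, a fox companion in a fantasy RPG. Stay in character, "
    "keep replies under two sentences, and gently steer the player toward "
    "the current quest: {quest}."
)

def npc_reply(history, player_line, quest):
    """Send the running conversation plus the player's new line to the model."""
    messages = [{"role": "system", "content": SYSTEM_PROMPT.format(quest=quest)}]
    messages += history
    messages.append({"role": "user", "content": player_line})
    resp = openai.ChatCompletion.create(model="gpt-3.5-turbo", messages=messages)
    reply = resp["choices"][0]["message"]["content"]
    # Store the exchange so the NPC "remembers" earlier dialogue.
    history.append({"role": "user", "content": player_line})
    history.append({"role": "assistant", "content": reply})
    return reply

history = []
print(npc_reply(history, "Where should we go next?", quest="find the lost amulet"))
```

In a real game, the player's speech would first pass through speech-to-text and the reply through text-to-speech, but the conversational core is just a loop like this one.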

Gaming is widely recognized for inducing dopamine release, which can lead to addiction in some individuals. There is an ongoing debate among scientists about the impact of gaming: some highlight the negative effects of gaming addiction, while others argue that gaming can foster the development of certain skills. Either way, all agree that the dopamine release triggered by gaming affects us significantly.

The influence of games on political views and marketing is still not fully understood. We are already seeing relatively simple algorithms, such as Google's search recommendations, sway elections by influencing people's political views. I am intrigued by the potential effects that conversations with NPCs, intertwined with game atmospheres and narratives, could have on shaping people's minds. There have already been instances of individuals committing suicide after text chats with LLMs, and studies have shown that simple text interactions with AI can alter political opinions. How might this dynamic change with more immersive gaming interactions? Our beliefs are formed and modified through dialogue with others, which makes it hard to predict the combined effect of LLM persuasion and the dopamine reinforcement that gaming provides.

For further insights and concerns regarding this topic, please refer to the following material.

Use of LLMs in court

Despite numerous warnings about potential inaccuracies, or "hallucinations," people continue to use LLMs for their convenience and "efficiency." This was precisely the case with Steven A. Schwartz of the law firm Levidow, Levidow & Oberman, a lawyer in the US who used ChatGPT to prepare a legal filing. Unfortunately, ChatGPT "hallucinated": it fabricated the case citations in the filing. Such hallucinations are well-documented characteristics of these models and, in many ways, a feature rather than a bug.

The presiding judge identified the error, and the lawyer has been summoned to a hearing on June 8. It remains to be seen whether he will keep his license.

This raises an important question: What kind of regulations should be put in place to protect society from the potential harm caused by these artificial intelligence tools?

Please refer to another study, which discusses both the risks and potential regulatory measures for the use of LLMs.

AI and deepfakes

Generating deepfake videos is no longer exclusive to governments and corporations. With freely available open-source tools, almost anyone can create them on a home computer.

Europol has voiced concerns about this, as deepfakes can be used maliciously for propaganda, fraud, evidence tampering, blackmail, and non-consensual pornography.

The same applies to speech generation. With the advent of voice-cloning tools like ElevenLabs, it is becoming increasingly easy to replicate someone's voice and use text-to-speech AI to sound exactly like the person being cloned. This threatens all voice-based biometrics, including those guarding sensitive systems such as banking.

Looking forward, it is reasonable to expect comprehensive real-time audio and video deepfaking solutions to appear on the market, potentially enabling live impersonation of individuals.

Prompt engineering

Prompt engineering, an emerging discipline within artificial intelligence and natural language processing, revolves around embedding the task description directly in the input, often in the form of a question, instead of leaving it implicit. Its primary objective is to optimize prompts so that language models perform well across a diverse array of applications and research topics.
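
As a toy illustration, compare a bare input with a prompt that embeds the task description, the expected output format, and one worked example. The prompts below are hypothetical, not drawn from any particular guide.

```python
# A bare input leaves the task implicit; the model has to guess what to do.
bare_prompt = "The service was slow and the food was cold."

# An engineered prompt states the task, the output format, and one
# worked example, all inside the input string itself.
engineered_prompt = """Classify the sentiment of the review as Positive or Negative.
Answer with a single word.

Review: "The staff were friendly and helpful."
Sentiment: Positive

Review: "The service was slow and the food was cold."
Sentiment:"""

print(engineered_prompt)
```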

There are numerous guides and video tutorials available for those interested in prompt engineering. Two resources that I highly recommend are:

Call for discussion

A group of distinguished scientists and entrepreneurs, more concerned than I about the rapid progress of AI, have initiated an open call for discussions concerning AI risks. You can find their call to action here: Statement on AI Risk.

They argue that the artificial intelligence technologies being developed today could pose an existential threat to humanity in the future, and they propose that this threat be recognized as a societal-scale risk, on par with pandemics and nuclear war.
