The truth about ChatGPT: what you need to know
Artificial Intelligence (AI) has revolutionized the way businesses communicate with their clients. ChatGPT, an AI chatbot developed by OpenAI, is one of the most popular chatbots in use today. However, it is essential to understand how it works, along with its limitations and pitfalls, before using it. In a recent Inman News article, Bernice Ross interviewed Jay Swartz, Chief Scientist for Likely.AI, who explained that AI chatbots such as ChatGPT are built on Large Language Models (LLMs). While these models have been trained on billions of pieces of data, there are still trillions upon trillions of possible combinations of language, which can lead to incorrect responses.
Swartz points out the probability problem with LLM chatbots. Because these models are driven by probability, they can make incorrect decisions based on which answer seems most likely. For example, when they encounter a question they are unable to answer, they may still generate a plausible-sounding but incorrect response. It is therefore important to ensure that the AI has been trained on the types of questions it will actually be asked.
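Swartz's point about probability-driven answers can be sketched with a toy example in Python. The vocabulary and probabilities below are invented purely for illustration and are nothing like a real LLM's scale; the idea is simply that the model samples its next word by likelihood rather than by checking facts, so a wrong answer can come out with real frequency:

```python
import random

# Hypothetical, made-up probabilities for the next word after
# "The capital of France is". A real LLM works over a vocabulary of
# tens of thousands of tokens; this is only a sketch of the mechanism.
next_word_probs = {
    "Paris": 0.85,   # most likely continuation, happens to be correct
    "London": 0.10,  # plausible-sounding but wrong
    "Berlin": 0.05,  # plausible-sounding but wrong
}

def pick_next_word(probs):
    """Sample a word in proportion to its probability, as sampling-based
    text generation does -- no fact-checking is involved."""
    words = list(probs)
    weights = [probs[w] for w in words]
    return random.choices(words, weights=weights, k=1)[0]

# Repeat the "question" many times: the model is usually right, but a
# noticeable fraction of answers are confidently wrong.
answers = [pick_next_word(next_word_probs) for _ in range(1000)]
wrong_rate = sum(a != "Paris" for a in answers) / len(answers)
```

Under these made-up numbers, roughly 15 percent of the sampled answers are wrong, even though the model never "intended" to mislead. This is the sense in which probability, not truth, drives the output.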
Another major problem with AI chatbots is anthropomorphism, the attribution of human characteristics to non-human objects. Many users treat ChatGPT as if it were a human, which can lead to dissatisfaction when it is unable to provide the right response. Swartz calls AI hallucinations a major challenge in the machine learning space. These hallucinations resemble human ones: the AI simply 'dreamt it up out of nowhere.'
While AI-powered chatbots have been revolutionary, it is important to understand their limitations and how to use them appropriately. Business owners must be vigilant about monitoring their chatbot to ensure it is giving customers accurate responses. Read the full article on Inman News or on Bernice Ross's Authory to learn more about the risks and pitfalls of using ChatGPT and how to avoid them.