Musa

The Future of AI #ChatGPT

Updated: Feb 16, 2023

As you may know, the world of Artificial Intelligence (AI) has recently seen some interesting developments, to say the least. This new technology has opened up the possibility of a future of unimaginable innovation, but it has also raised some concerns. One company currently in the spotlight is OpenAI, the creators of ChatGPT.


AI language models (chatbots) are among the most fascinating pieces of software we currently have access to. These language models can generate human-like responses to the questions asked of them. Other AI software, such as OpenAI's DALL·E, can generate creative and realistic images from word prompts.




How Does A Chatbot Work?


The following question may arise: how are chatbots different from Google? Well, a Google search simply finds the most relevant response a human being has already written to whatever is searched. An AI chatbot, by contrast, essentially creates intellectual property* by drawing on the vast amount of information on the Internet to compose an understandable and accurate response. This can seem uncanny to some, as it can be almost indistinguishable from human work.
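To make that distinction concrete, here is a deliberately tiny sketch of the core idea behind a language model: it assigns probabilities to the next word given the text so far, then samples one and repeats. (The hand-written bigram table below is purely illustrative; ChatGPT uses a neural network with billions of parameters, not a lookup table.)

```python
import random

# Toy next-word probability table: for each word, the possible
# continuations and how likely each one is. Purely illustrative.
NEXT_WORD_PROBS = {
    "the": {"cat": 0.5, "dog": 0.5},
    "cat": {"sat": 0.7, "ran": 0.3},
    "dog": {"sat": 0.4, "ran": 0.6},
    "sat": {"down": 1.0},
    "ran": {"away": 1.0},
}

def generate(prompt_word, max_words=5, seed=0):
    """Repeatedly sample the next word until no continuation exists."""
    rng = random.Random(seed)
    words = [prompt_word]
    for _ in range(max_words):
        choices = NEXT_WORD_PROBS.get(words[-1])
        if not choices:
            break  # no known continuation: stop generating
        words.append(rng.choices(list(choices), weights=choices.values())[0])
    return " ".join(words)

print(generate("the"))
```

The key contrast with a search engine is visible even at this scale: nothing in the output is retrieved from a stored document; every word is produced fresh from a probability distribution.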


What are the implications?

Well, for one, who does the generated text belong to? Almost anyone can access ChatGPT and use it as they see fit, and with ChatGPT constantly learning from its users, it is debatable whether the generated text can reasonably be said to belong to the AI itself. Also, can its responses be passed off as human work? This is a heated discussion, and it has even reached senior government institutions around the world. Some states in the US have banned access to such software due to fears of cheating (for example, in schools).


The Current Dilemma


How can we prevent misuse while still allowing people the freedom to exercise their curiosity and use AI chatbots productively? OpenAI recently released a report titled "Forecasting Potential Misuses of Language Models for Disinformation Campaigns — and How to Reduce Risk". In it, the authors detail how AI could affect influence operations, including work such as content creation, essays, and so on. The report outlines a prevention framework consisting of four stages at which AI misuse can be countered. One stage is receiving heavy attention at the moment: the third stage, "content dissemination."


There have been a number of discussions about whether it is possible to tell human-written text apart from text written by AI. Currently, a number of "classifier" tools exist to detect AI-generated text. These have been created by software companies including OpenAI itself, as well as by others such as "GPTZeroX".


Although this could prove to be a viable solution, those building this type of software have run into a few hurdles, some acknowledged in their own reports. These classifiers are unreliable on text shorter than roughly 1,000 characters, and some have a reported accuracy of only 26%.
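As an illustration of why length matters, here is a toy "classifier" in the same spirit. The scoring rule, threshold, and minimum length below are invented for illustration and do not correspond to any real detector: it computes a crude lexical-variety score and simply refuses to judge inputs that are too short to score reliably.

```python
def type_token_ratio(text):
    """Fraction of distinct words: a crude lexical-'variety' signal."""
    words = text.lower().split()
    return len(set(words)) / len(words) if words else 0.0

def classify(text, min_chars=1000, threshold=0.5):
    """Return a verdict, refusing to judge short inputs.

    Both min_chars and threshold are made-up values for this sketch;
    real classifiers use learned models, not a single hand-set cutoff.
    """
    if len(text) < min_chars:
        return "too short to classify reliably"
    # Very repetitive text scores low on variety and gets flagged.
    return "possibly AI" if type_token_ratio(text) < threshold else "likely human"
```

Even this toy version shows the trade-off: below the minimum length there is too little signal for any statistic to be meaningful, which mirrors the 1,000-character floor real classifiers impose.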



The Future of AI


AI has been a heavily researched and studied area of Computer Science for quite some time now. There have been many chatbots and language models in the past, each with its own failures and controversies. Some earlier chatbots were allowed to learn from human behaviour; unfortunately, this backfired when an AI began producing toxic, unreliable and harmful content. Although newer models like ChatGPT have been trained to actively reject negative prompts, there are still issues which AI developers must tackle before their software becomes widely accepted and feasible to use in our daily lives.
