
GPT-3 For Chatbot Development
1. Introduction
Developed by OpenAI, GPT-3 is a machine learning model that specializes in language processing. Given an input text, GPT-3 outputs new text. Depending on the input and the exact flavor of GPT-3 used, the output will correspond to a task such as answering a question posed in the input, completing the input with additional data, translating the input from one language to another, summarizing it, inferring its sentiment, or even crazier things like writing a piece of computer code from clues given in the input. Among many other applications!
Trained on huge text corpora and built from billions of parameters, GPT-3 is one of the largest language models available out there. But among all language processing models, I find GPT-3 particularly attractive because of these features:
- It runs online, so I don’t need to download anything to the computer or server where I want to use it. I can use it by simply adding calls to the GPT-3 API in my source code.
- The above includes the possibility to write web code that will call the GPT-3 API, so you can incorporate the power of this tool into your web pages. Here we will see how to achieve this with simple PHP and JavaScript.
- Despite not giving you a grip on its executables (because it all runs online), GPT-3 is very customizable. It allows you to program chatbots that behave in a way you can tune, or even more interestingly, that “know” about a particular topic you teach them. We’ll look here at one of the two possible ways to train your GPT-3 models ad hoc for your purposes.
- GPT-3 is very easy to work with, for example in Python or as shown here in JavaScript and PHP.
I recently tested the capabilities and limitations of GPT-3 in supporting science education and research by acting as an always-available bot that can answer questions from students or researchers. Part of these tests involved teaching GPT-3 some pieces of data that one can then ask questions about.
The results were impressive, despite many limitations, mainly stemming from the fact that the model does not necessarily understand what it reads and writes… it is simply a statistical model that synthesizes text that is grammatically correct but that may or may not be factually true. You can learn more about the tests I’ve done in these recent articles.
2. Awesome chatbots allowed by GPT-3
In this article, we will see how to create a simple chatbot that knows about a specific topic that you provide instantly. Users can naturally chat with the bot through a web page, asking whatever they want. When questions refer to a topic you’ve informed the chatbot about, it will respond based on that content.
The best part is that you don’t need to phrase your questions in a very systematic way, as you would with a regular chatbot that matches questions to canned answers. Rather, you can retrieve information naturally. Also, the answers depend on the context, so you can for example mention a person by name and then refer to them with pronouns.
For example, let’s compare the outputs of a regular GPT-3 chatbot when asked about me, versus a GPT-3 chatbot that was given a brief bio about me and some of my projects. The former will make things up, or at best just not provide an answer, while the latter will be more accurate in its answers, at least when you ask it about the information I’ve given it.
Let me first show you this short conversation with a custom-trained GPT-3 chatbot. I achieve this with what the folks at OpenAI call “Few Shot Learning”. It consists of preceding the prompt questions (to be sent to the GPT-3 API) with a block of text containing the relevant information. It’s not very efficient because it’s limited by the amount of information you can pass, and also because it uses a lot of tokens every time you send a prompt. But it is very simple and practical.
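To make the idea concrete, here is a minimal JavaScript sketch of that few-shot prompt assembly; the function and variable names are my own, not anything prescribed by OpenAI:

```javascript
// Minimal sketch of "few-shot learning" prompt assembly: the background text
// and the conversation so far are simply concatenated before the new question.
const background =
  "Hello, welcome to Luciano Abriata's website. I am here representing " +
  "Luciano online. Feel free to ask any questions.";

// history holds all previous "Visitor: ... / Assistant: ..." turns.
function buildPrompt(background, history, question) {
  return background + "\n\n" + history +
         "Visitor: " + question + "\nAssistant:";
}

const prompt = buildPrompt(background, "", "Who is Luciano Abriata?");
// prompt now ends with "Assistant:", cueing GPT-3 to answer as the assistant.
```

Every token of that background block is billed on every request, which is exactly the inefficiency mentioned above.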
You see my questions in bold; the rest are chatbot responses, and I write a few lines with comments in between:
Visitor: Who is Luciano Abita?
Assistant: Luciano Abriata is a biotechnologist, doctor in chemistry, artist, and content creator.
Visitor: What has he made?
Assistant: Luciano has created a website called moleculeARweb, a website for education and outreach in chemistry and structural biology through augmented reality (AR) content.
The information is accurate, and although it is provided in the training paragraph, it does not textually match any part of it. GPT-3 may have rephrased it, or it may already know about it, since our website has been online since 2020.

3. Adding GPT-3 to your site with simple PHP and JavaScript
You need a few things first: a server that runs PHP and allows JavaScript, an OpenAI API key, and a PHP library that you’ll call from your JavaScript code.
i) A server that can run PHP and allows JavaScript code.
The easiest solution is to use a hosting service that natively provides the PHP runtime and allows JavaScript. I use Altervista, whose basic free package already allows both! You don’t even need to install PHP, but you do need to activate it (a free feature as of March 2022). I’m using PHP version 8, and I had to enable all outgoing connections without restrictions (otherwise it wouldn’t connect to the OpenAI API).
ii) An API key from OpenAI
In most countries, people can get a free API key to experiment with the system, with some credits granted upfront at no cost. Check out OpenAI’s official site at https://beta.openai.com/signup.
Important: Do not give out your API key (either in person or by exposing it in your JavaScript code!) because anyone using it will burn your credits!
The examples I present here require users to enter their own key. The web app sends the key to the PHP wrapper that calls the GPT-3 API (note that the key is not saved, so feel free to try my code!).
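As a sketch of how the key can travel with each request without ever being stored, the client side can do something along these lines (the file name gpt3wrapper.php and the field names are placeholders I made up, not the actual names in my code):

```javascript
// Build the POST body sent to the PHP wrapper; the user's key is included
// per-request and never written to any storage.
function buildRequestBody(apiKey, promptText) {
  return new URLSearchParams({ key: apiKey, prompt: promptText });
}

// Hypothetical endpoint name; replace with your own PHP wrapper file.
async function askGpt3(apiKey, promptText) {
  const response = await fetch("gpt3wrapper.php", {
    method: "POST",
    body: buildRequestBody(apiKey, promptText),
  });
  return response.text(); // the GPT-3 completion relayed by the PHP wrapper
}
```

Because the key only ever appears in the request body of a server-side call, it is never exposed in the page’s source and never persists after the request.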

iii) A PHP library to integrate with OpenAI’s GPT-3, and how to use this library from your web page’s HTML+JavaScript code.
OpenAI does not natively support PHP, but there is a dedicated community of developers who have written libraries for calling the GPT-3 API on PHP (and also from code in other languages):
OpenAI API — an API to access new AI models developed by OpenAI: beta.openai.com
I tried some of the available PHP libraries and settled for:
OpenAI-GPT-3-API-Wrapper-for-PHP-8 (OpenAI.php), on github.com
But I had to make some small modifications to the main file, OpenAI.php, to make it work; you can get the final file as I used it here. One of the smaller modifications was to pass the user’s API key programmatically instead of keeping it hardcoded. That way, users of your app don’t spend tokens from your account! The downside is that they need to get a key themselves. An intermediate solution would be to hardcode your own key into a PHP file, as in the original OpenAI.php, and then charge your users when they use your app.
There is one more file you need, which links your HTML/JavaScript file to the main PHP file that calls the GPT-3 API. This is a short PHP file with this content:
Interfacing PHP and JavaScript to run GPT-3 on OpenAI
Your JavaScript code only needs to make an asynchronous call to the complete function defined in the above PHP file, passing it an object that contains the text prompt and the parameters you want to use.
If you examine the source code of the example I show you later in this article, you’ll see that it’s a bit more complicated. This is because my web app cleans the output to remove the echoed input and other stuff. It also reformats the text to display the user and chatbot names in bold, and it stores the entire set of inputs and outputs in an internal variable that gives GPT-3 context for each conversation.
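That post-processing can be sketched like this (an illustrative reconstruction, not my app’s exact code):

```javascript
// Running record of the whole conversation, resent with every new prompt so
// GPT-3 has context for follow-up questions.
let conversation = "";

// GPT-3 completions can echo the prompt; keep only the newly generated text.
function cleanOutput(rawOutput, promptText) {
  return rawOutput.startsWith(promptText)
    ? rawOutput.slice(promptText.length).trim()
    : rawOutput.trim();
}

// Reformat a finished turn for display, with the speaker names in bold HTML.
function formatForDisplay(question, answer) {
  return "<b>Visitor:</b> " + question +
         "<br><b>Assistant:</b> " + answer + "<br>";
}

// Append the finished turn to the stored conversation for the next prompt.
function rememberTurn(question, answer) {
  conversation += "Visitor: " + question + "\nAssistant: " + answer + "\n";
}
```

On each turn the page cleans the raw completion, shows the formatted exchange, and remembers it, so the next prompt carries the full dialogue.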
4. Teaching your model factual data it doesn’t know about in the form provided by OpenAI
In other words: giving GPT-3 what it needs to know to best meet your goals.
As I mentioned above, there are two main ways to “train” your GPT-3 model with ad hoc data. The one used here, also introduced above, is called “few-shot learning”. Few-shot learning is very simple: you augment your prompts (i.e., the inputs containing your questions for GPT-3) with a few paragraphs of relevant information.
In the example we saw above (and that you can play with, see Section 3), where users ask the chatbot about me and it has to answer on my behalf, I fed it two paragraphs:
PARAGRAPH 1: Hello, welcome to Luciano Abriata’s website. I am here representing Luciano online. I know a lot about him – I’m an OpenAI GPT-3 model written by Luciano. Feel free to ask any questions. Dr. Luciano A. Abriata is a biotechnologist, doctor in chemistry, artist, and content creator.
On science subjects, Luciano has experience in structural biology, biophysics, protein biotechnology, molecular visualization through augmented and virtual realities, and experimental and computational aspects of science education using advanced technologies. Luciano Abriata was born in 1981 in Rosario, Argentina. He studied biotechnology and chemistry in Argentina and then moved to Switzerland where he currently works in two laboratories at the Ecole Polytechnique Federale de Lausanne (EPFL).
He works in EPFL’s Laboratory for Biomolecular Modeling and in EPFL’s Protein Production and Structure Core Facility. He is currently working on web-based methods to visualize and manipulate molecular structures in an immersive way to achieve commodity augmented reality and virtual reality tools. He also collaborates with several groups on molecular modeling, simulation, and nuclear magnetic resonance (NMR) spectroscopy applied to biological systems.
PARAGRAPH 2: moleculeARweb started as a website for education and outreach in chemistry and structural biology through augmented reality (AR) content that runs in web browsers on regular devices such as smartphones, tablets, and computers. Here we present two evolutions of moleculeARweb’s Virtual Modeling Kits (VMK), tools where users can create and visualize molecules, and their mechanics, in 3D AR with custom-printed cube markers (VMK 2.0).

Users move around the simulation scene with mouse or touch gestures (VMK 3.0). Molecules in the simulation experience visually realistic torsions, collisions, and hydrogen-bonding interactions, which the user can manually turn on and off to explore their effects. Additionally, by manually tuning the simulated temperature, users can accelerate conformational transitions or ‘freeze’ specific conformations for careful inspection in 3D.
Even some phase transitions and separations can be simulated. We demonstrate here these and other features of the new VMKs and link them to potential specific applications for the teaching and self-learning of general, organic, biological, and physical chemistry concepts, and for assisting with small tasks in molecular modeling for research. Finally, in a brief section, we review what future developments are needed to reach a ‘dream tool’ for the future of chemistry education and work.
Every time a user asks my chatbot something, my web page isn’t just sending the question: it’s prepending the two paragraphs of information, plus any previous questions and answers, to the new question. What GPT-3 outputs is then based on that data, if it finds some relevant content in it (you can still ask it about something else, and it will still respond).
As I’ve argued before, “training” your GPT-3-based chatbot through “few-shot learning” isn’t very efficient, because it’s limited by the amount of information you can pass, and it also uses many tokens every time you send a prompt. But it’s extremely simple and practical, as you’ve seen in the examples above and in Section 3.