
This is how you make applications smarter by integrating OpenAI technologies.

If there is one technology that dominated the past year, it's the "smart search engine" ChatGPT. But everything will truly accelerate once ChatGPT is integrated into other applications. Rutger Van der Auwera, frontend developer at Axxes, explained at Haxx how that works.



When Rutger Van der Auwera started as a frontend developer in 2012, artificial intelligence was still in its infancy. Siri was just a few weeks old, and no one seemed able to surpass Google's search engine. Fast forward a decade, and a lot has changed. Rutger currently works through Axxes at the fleet management platform Alphabet, where he prefers to work with Angular. His job is completely different from when he started, because the launch of ChatGPT has completely changed the playing field.

ChatGPT is one of many chatbots based on a Large Language Model, a technology capable of understanding, generating, and manipulating human language. You can use the tool by visiting the OpenAI website, but you can also integrate it into your existing applications.

Artificial intelligence - and that goes beyond ChatGPT - can give your systems a significant upgrade. It can improve UX, provide more personalized results, and make a platform more accessible by automatically translating texts, among other things. In other words, it makes it easier for users to interact with your app or tool: it's simply easier to talk to 'someone' than to search through a list or fiddle with filters. And just as every company was asked whether it had an app after the launch of the iPhone, everyone now has to explain what they are doing with AI.

At our internal Haxx conference, Rutger demonstrated how to build a smart chatbot yourself and how to integrate ChatGPT into your web applications via its APIs. He also presented a demo of a smart chatbot that recommends movies or series from a Kaggle dataset based on a few simple questions. In addition to OpenAI's technology, Rutger used Supabase, a cloud-hosted Postgres database, as well as Angular and Tailwind.



This is how the OpenAI API works.

OpenAI has various applications, with text generation via ChatGPT being the most well-known: you ask something, and the tool responds. Through fine-tuning, you can also create your own ChatGPT model. DALL-E, on the other hand, generates images based on written input, while Vision can be used to describe what is in an image. Whisper does the same, but for audio.

Accessing OpenAI's APIs is quite simple and done via a POST request. You send along general information, known as the role, and the content, which can range from plain text to an image URL. You also need to include your API key, which you receive when you create an account, and certain parameters.

In the case of text generation, for example, you need to indicate which ChatGPT model you are using. The most well-known is GPT-3.5-turbo, but there are successors that can also interpret images. In addition, models often carry an extra number that refers to their release date or to the cutoff date of the data ChatGPT was trained on.

Other data you send along includes the conversation you are having with ChatGPT and the temperature. That parameter has nothing to do with how hot or cold it is: it's a number between zero and one that indicates how creative ChatGPT is allowed to be. The frequency penalty determines how strongly the model is discouraged from repeating the same phrases, while the presence penalty nudges it towards new topics. You also set the maximum number of tokens and the number of responses the tool may return. Since pricing is based on the tokens you consume, it can be worth limiting these.
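As a minimal sketch of such a request (assuming a Node 18+ environment with built-in fetch; the endpoint and parameter names follow OpenAI's chat completions API, the values are only examples), the call could look like this:

```typescript
// Minimal sketch of a chat completion call with the parameters described above.
// OPENAI_API_KEY is assumed to be available as an environment variable.
type ChatMessage = { role: 'system' | 'user' | 'assistant'; content: string };

async function chatCompletion(messages: ChatMessage[]): Promise<string> {
  const response = await fetch('https://api.openai.com/v1/chat/completions', {
    method: 'POST',
    headers: {
      'Content-Type': 'application/json',
      Authorization: `Bearer ${process.env['OPENAI_API_KEY']}`,
    },
    body: JSON.stringify({
      model: 'gpt-3.5-turbo',   // which model to use
      messages,                 // the conversation so far (role + content)
      temperature: 0.7,         // lower = predictable, higher = more creative
      frequency_penalty: 0.5,   // discourage repeating the same phrases
      presence_penalty: 0,      // nudge towards (or away from) new topics
      max_tokens: 300,          // cap the length of the answer
      n: 1,                     // number of responses to generate
    }),
  });
  const data = await response.json();
  return data.choices[0].message.content;
}
```

A call such as `await chatCompletion([{ role: 'user', content: 'Recommend a feel-good film.' }])` then returns the model's reply as plain text.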


Embeddings, images, and moderation

In addition to text generation, as mentioned, OpenAI's technology can do much more. For instance, you can convert text into a series of numbers using Embeddings. Humans cannot read these numbers, but artificial intelligence can. Based on these numbers, the tool can determine the similarity of texts. This capability allows the tool to assess whether certain words are semantically related, which is useful for search queries.
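A rough sketch of such an embeddings call (same Node 18+ assumption as above; the endpoint and model name follow OpenAI's embeddings API):

```typescript
// Convert a piece of text into an embedding: a long vector of numbers.
async function createEmbedding(input: string): Promise<number[]> {
  const response = await fetch('https://api.openai.com/v1/embeddings', {
    method: 'POST',
    headers: {
      'Content-Type': 'application/json',
      Authorization: `Bearer ${process.env['OPENAI_API_KEY']}`,
    },
    body: JSON.stringify({
      model: 'text-embedding-ada-002',
      input, // e.g. a movie description or a user's search query
    }),
  });
  const { data } = await response.json();
  return data[0].embedding; // e.g. 1536 numbers for this model
}
```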

Here too, the call is a POST request with your input. Depending on your intended use, you may need to set up a few extra things. If you are working with Postgres, for example, you need to add an extension so the database can store and compare this kind of data. In your table, you add a column in which you store the embeddings returned by the API. In his demo, Rutger showed how he created a stored procedure to match movies: when you invoke that function with an embedding, the chatbot knows which movie to recommend.

To get there, Rutger first had to prepare his data. The title and description of each movie were sent to the embeddings API, converted into numbers, and stored in the embeddings column. When a user then enters a query, that query is converted into an embedding as well and matched against the stored embeddings to retrieve the right movie.
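Rutger's exact schema and stored procedure weren't shown here, but the flow could be sketched like this with the Supabase JavaScript client; the `movies` table, `embedding` column (a pgvector column) and the `match_movies` function are hypothetical stand-ins for the ones in the demo:

```typescript
import { createClient } from '@supabase/supabase-js';

const supabase = createClient(
  process.env['SUPABASE_URL']!,
  process.env['SUPABASE_ANON_KEY']!,
);

// Store the embedding of a movie's title + description in a vector column.
// Table and column names are illustrative, not the demo's actual schema.
async function storeMovieEmbedding(movieId: number, embedding: number[]) {
  const { error } = await supabase
    .from('movies')
    .update({ embedding })
    .eq('id', movieId);
  if (error) throw error;
}

// Find the movies whose stored embeddings are closest to the query embedding.
// 'match_movies' stands in for the stored procedure from the demo.
async function findMatchingMovies(queryEmbedding: number[], count = 5) {
  const { data, error } = await supabase.rpc('match_movies', {
    query_embedding: queryEmbedding,
    match_count: count,
  });
  if (error) throw error;
  return data;
}
```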

Want to use OpenAI's image generator? That's also quite straightforward: in your call, you provide your model along with a prompt and the number of images you want to receive. You can also specify the desired size of the images. The AI then generates the images itself, and the response you get back is a list of URLs to images you can use.
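A sketch of such an image request (endpoint per OpenAI's images API; the prompt, size and number of images are just examples):

```typescript
// Ask DALL-E for two images and collect the returned URLs.
const response = await fetch('https://api.openai.com/v1/images/generations', {
  method: 'POST',
  headers: {
    'Content-Type': 'application/json',
    Authorization: `Bearer ${process.env['OPENAI_API_KEY']}`,
  },
  body: JSON.stringify({
    model: 'dall-e-2',
    prompt: 'A retro movie poster for a film about a time-travelling cat',
    n: 2,            // number of images to generate
    size: '512x512', // desired dimensions
  }),
});

const { data } = await response.json();
const urls: string[] = data.map((image: { url: string }) => image.url);
```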

Finally, OpenAI also offers a tool for moderating content. By sending a text via the API, this software can indicate whether it crosses the line, for example. You also receive the scores per category and which rules are violated.
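The moderation call follows the same pattern; a minimal sketch:

```typescript
// Check a piece of user input against OpenAI's moderation categories.
const response = await fetch('https://api.openai.com/v1/moderations', {
  method: 'POST',
  headers: {
    'Content-Type': 'application/json',
    Authorization: `Bearer ${process.env['OPENAI_API_KEY']}`,
  },
  body: JSON.stringify({ input: 'Some text a user typed into your chatbot' }),
});

const { results } = await response.json();
console.log(results[0].flagged);         // true if a rule is violated
console.log(results[0].categories);      // which rules were violated
console.log(results[0].category_scores); // score per category
```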


Here's how to achieve the best result.

It takes some getting used to as a user to start working with OpenAI's applications, but the better you learn to prompt, the better the result. According to Rutger, there are a few tricks you can apply. The more details you provide, the better your result will be. For example, specify the type of response you would like, because otherwise ChatGPT tends to give the same kind of answers.

Context also plays a role. For example, if you want to get movie recommendations, it's important to specify why and how you want to use those movies. It's still experimental, but you can also use ChatGPT to improve your prompt. In that case, outline your desired output and ask ChatGPT what question you should ask for that.

There are several models that you can perfectly use side by side. Pricing plays a role, but the cheaper ones often give an equally good result. And one more golden tip: be polite. If you phrase your prompt politely, ChatGPT leans on online conversations in which questions were asked politely, and those usually received polite, helpful answers.


Bringing everything together

By combining your embeddings and your query, you can also base the response on data you provide to ChatGPT. In Rutger's case, you can ask ChatGPT to suggest movies from the database of series and films. When you type a query, it is converted into an embedding, that embedding is used to search the database for matching movies, and the result is handed back to ChatGPT so it can answer based on the available films.
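Put together, that flow could be sketched as follows, reusing the hypothetical helpers from the earlier snippets (`createEmbedding`, `findMatchingMovies` and `chatCompletion`); the `title` and `description` columns are illustrative:

```typescript
// End-to-end sketch: embed the question, look up matching movies in Supabase,
// and let ChatGPT answer using only those movies as context.
async function recommendMovie(question: string): Promise<string> {
  const queryEmbedding = await createEmbedding(question);   // embeddings API
  const matches = await findMatchingMovies(queryEmbedding); // stored procedure

  const context = matches
    .map((m: { title: string; description: string }) => `${m.title}: ${m.description}`)
    .join('\n');

  return chatCompletion([
    { role: 'system', content: `Recommend a movie using only this list:\n${context}` },
    { role: 'user', content: question },
  ]);
}
```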

Another handy way to bring everything together is function calling. With this, you give ChatGPT a list of functions in your code that it may call. Based on your query, the tool itself chooses which of those functions to call in order to retrieve data.
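A sketch of such a request using the `tools` parameter of the chat completions API; the function name and its arguments are illustrative, and your own code still performs the actual call:

```typescript
// Describe a function ChatGPT may "call"; the model returns the name and
// JSON arguments it wants to use, and your code executes the real function.
const response = await fetch('https://api.openai.com/v1/chat/completions', {
  method: 'POST',
  headers: {
    'Content-Type': 'application/json',
    Authorization: `Bearer ${process.env['OPENAI_API_KEY']}`,
  },
  body: JSON.stringify({
    model: 'gpt-3.5-turbo',
    messages: [{ role: 'user', content: 'Which thrillers came out in 1999?' }],
    tools: [
      {
        type: 'function',
        function: {
          name: 'search_movies', // illustrative name
          description: 'Search the movie database by genre and year',
          parameters: {
            type: 'object',
            properties: {
              genre: { type: 'string' },
              year: { type: 'integer' },
            },
            required: ['genre'],
          },
        },
      },
    ],
  }),
});

const data = await response.json();
// If the model decided to call a function, its name and arguments are here:
const toolCall = data.choices[0].message.tool_calls?.[0];
console.log(toolCall?.function.name, toolCall?.function.arguments);
```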

For this purpose, OpenAI developed Assistants: a ChatGPT instance with specific instructions and tools. It works with a thread, which is the list of messages in your conversation. Finally, there is the run, the actual processing of such a thread by the assistant.
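A rough sketch of that assistant / thread / run sequence; the endpoints and the beta header below follow OpenAI's Assistants API as it existed in beta, and the instructions and messages are only examples:

```typescript
const headers = {
  'Content-Type': 'application/json',
  Authorization: `Bearer ${process.env['OPENAI_API_KEY']}`,
  'OpenAI-Beta': 'assistants=v1', // the Assistants API was still in beta
};

// 1. Create an assistant: a ChatGPT instance with its own instructions and tools.
const assistant = await (await fetch('https://api.openai.com/v1/assistants', {
  method: 'POST',
  headers,
  body: JSON.stringify({
    model: 'gpt-3.5-turbo',
    instructions: 'You recommend movies from our catalogue.',
  }),
})).json();

// 2. Create a thread (the list of messages in one conversation) and add a message.
const thread = await (await fetch('https://api.openai.com/v1/threads', {
  method: 'POST',
  headers,
})).json();

await fetch(`https://api.openai.com/v1/threads/${thread.id}/messages`, {
  method: 'POST',
  headers,
  body: JSON.stringify({ role: 'user', content: 'Suggest a family movie.' }),
});

// 3. Start a run: the actual processing of the thread by the assistant.
await fetch(`https://api.openai.com/v1/threads/${thread.id}/runs`, {
  method: 'POST',
  headers,
  body: JSON.stringify({ assistant_id: assistant.id }),
});
```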



Security and pricing

When using ChatGPT in your application, you must consider certain security issues. It offers many capabilities, but it's crucial to anonymize data before sending it to ChatGPT, and to refrain from transmitting user data at all. It's theoretically possible to have ChatGPT generate SQL queries that access your database directly, but it's better to limit its capabilities in that respect. Also monitor user queries to prevent misuse.

The cost of using ChatGPT varies depending on the model and frequency of usage. Generating images can be more expensive, especially for high-definition images.

According to Rutger, Large Language Models will be used more and more, although he doubts it will primarily be through OpenAI. The main concern is the uncertainty about how data is handled. As a result, more companies are opting for solutions where such Large Language Models run locally.
