Building Better GPTs: Lessons from ‘Secret Weapons’ in Software

Behind the scenes

I feel like I discovered some buried treasure this week. What do I mean by this?

Software’s buried treasure

Well, for a long time, I’ve felt that one of the problems with a lot of technology, software tools and apps is that their real strength and power gets buried away in all the gloss of marketing and the mistaken belief that all users want to do is point and click as quickly and easily as possible.

Don’t get me wrong. Without being able to point and click successfully, you are finished. No-one is going to use your tool or technology if it’s a pain to navigate around. But what if, in the rush to simplify, to show potential users how easy it is to point and click, you end up glossing over some real technology gold? Things that, if understood by users, would actually 10X their abilities and the power of said technology or software.

Buried treasure examples

Here are a couple of cases in point, about a piece of software many of you will be familiar with – Articulate’s Storyline. The first example is Storyline variables. I’ve jokingly called them Storyline’s ‘secret weapon’ before now. But seriously, once you master variables and understand what you can do with them, the e-learning design options open to you increase exponentially.

But the power and potential of variables is largely glossed over in the help content that Articulate provide to Storyline users.

The other Storyline example is Masters and Layouts. Incredibly powerful and a great time-saver once you really understand how they work and what they can do. Again, largely glossed over in the help content.

Custom GPTs’ buried treasure

And with all this preamble, you have probably realised by now that I believe I have stumbled across a similar ‘secret weapon’ in relation to AI and building custom GPTs. And you’d be right. I believe I have.

The buried treasure I believe I unearthed this week is connected to how you provide knowledge to a custom-built GPT – something that will be pretty important to users of PerformaGo.

As you may be aware already, LLMs are notorious for their tendency to hallucinate. In plain English, their tendency to make stuff up when they are not sure about something or they lack access to accurate or relevant information.

One of the key points about custom GPTs in a workplace performance support setting is that they will need to access specific pieces of information accurately and not just make stuff up.

Controlling the hallucinations

Now in theory, the way to avoid this when building a custom GPT is to add your pieces of relevant content into a knowledge base attached to the GPT. In theory, that should keep the answers grounded and everything will be fine.

In reality, even with this more controlled environment, you will still struggle to get the GPT to consistently output the answers you would like, with the right level of accuracy and detail.

And the reason for this, as I discovered this week, is related to factors such as how you control input and output; the way knowledge base content is structured; and how you set up the GPT to query that structured knowledge.
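To make that last point a little more concrete, here is a minimal sketch of the general idea: store knowledge as structured entries rather than loose prose, look answers up before generating anything, and refuse rather than guess when nothing matches. The entries, function names and example policies below are all hypothetical illustrations, not PerformaGo's actual implementation.

```python
import json

# Hypothetical structured knowledge base: each entry has topic keywords
# and a canonical answer, instead of one undifferentiated blob of text.
KNOWLEDGE = {
    "travel_expenses": {
        "topic": "travel expenses",
        "answer": "Travel expenses over 500 GBP require prior approval.",
    },
    "annual_leave": {
        "topic": "annual leave",
        "answer": "Annual leave requests must be submitted 2 weeks in advance.",
    },
}


def lookup(query: str):
    """Return the stored answer whose topic keywords all appear in the query."""
    q = query.lower()
    for entry in KNOWLEDGE.values():
        if all(word in q for word in entry["topic"].split()):
            return entry["answer"]
    return None


def answer(query: str) -> str:
    """Answer only from structured knowledge; refuse rather than make something up."""
    found = lookup(query)
    if found is not None:
        return found
    return "I don't have that information in my knowledge base."
```

The key design choice is the final branch: controlling the output means an explicit "I don't know" path, so the assistant never falls back to inventing an answer when the knowledge base has no match.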

This is an absolute treasure trove of control that is either not available or largely glossed over in many generalist ‘build a custom GPT’ tools.

Digging up the treasure trove

So, rest assured this gold will absolutely find its way into PerformaGo. It might not all be there in early releases but it will eventually make the cut. And that gold will be explained properly and (with luck) be made easy and intuitive to use.

So, this week has been a big breakthrough moment – a realisation that some of the frustrations I’ve been experiencing with accuracy and relevance of output are largely fixable – as long as you know how.

Until next time…


PS Since writing this post, I’ve gone even deeper on this topic. So look out for diary posts in the ‘Learning to Speak API’ section, where I’ll be writing more about the technical aspects of all this.
