
Everyday AI Podcast – An AI and ChatGPT Podcast

EP 158: The ChatGPT Mistake You Don’t Know You’re Making

Tue Dec 05 2023
AI, ChatGPT, Tesla, AI Alliance, Large Language Models


The Everyday AI Show simplifies AI and makes it accessible. The host, Jordan Wilson, tests and verifies new features in AI tools. The podcast covers topics such as common mistakes in using ChatGPT, concerns about Tesla's AI technology, the AI Alliance launched by Meta and IBM, and tips for using large language models correctly.


Mistakes in Using ChatGPT

Many people make mistakes when using ChatGPT, such as sharing chats or building custom GPTs without considering the problems these can cause for the people who use them.

Importance of Knowledge Retention

Knowledge retention is a common problem when using ChatGPT or similar models, and understanding how document uploads work is crucial to avoiding mistakes.

Improving GPT Performance

To improve the performance of GPT models, users should test and verify them, include conditional instructions, and monitor token usage and memory limits.


  1. Introduction
  2. ChatGPT Features and Usage
  3. Using Large Language Models Correctly
  4. Document Referencing and Knowledge Base
  5. Improving GPT Performance and Reliability
  6. Optimizing GPT Usage


Introduction

00:01 - 07:11

  • The podcast is called the Everyday AI Show and it aims to simplify AI and make it accessible.
  • The host, Jordan Wilson, is a former journalist who tests and verifies new features in AI tools before sharing them with the audience.
  • There is a mistake that many people are making when using ChatGPT, which leads to incorrect information and hallucinations.
  • Sharing these chats or building custom GPTs can cause issues for the people using them on the back end.
  • In the AI news segment, concerns have been raised about Tesla's AI technology used in self-driving cars, with leaked data revealing customer complaints and safety concerns.
  • Donald Trump claimed that an ad from The Lincoln Project used AI to make him look bad, but it actually used real footage of him that highlighted gaps in his presidential campaign.
  • Meta and IBM have launched the AI Alliance, a group of leading organizations focused on open innovation and science in AI. They aim to develop benchmarks, support global skills building, and promote responsible use of AI.
  • The host encourages listeners to sign up for the free daily newsletter for more detailed breakdowns of each podcast episode.

ChatGPT Features and Usage

06:54 - 13:48

  • ChatGPT has a free version and a paid version called ChatGPT Plus, which costs $20 per month.
  • Due to recent updates, there is currently a waitlist to sign up for ChatGPT Plus.
  • The default mode of ChatGPT now offers new functionality such as document uploads, browsing with Bing, advanced data analysis, and image generation with DALL·E.
  • Custom GPTs allow users to train their own version of ChatGPT by uploading documents into its knowledge base.
  • OpenAI plans to release the GPT Store in early 2024, allowing users to share their custom GPTs.
  • The process of uploading documents and setting up custom GPTs has become easier than before.
  • Many people lack understanding of how document uploads work in ChatGPT and make mistakes when using it.
  • Knowledge retention is identified as one of the biggest problems or mistakes that people make when using ChatGPT.
  • There is still a limit on ChatGPT's memory capacity, currently 32K tokens, or about 24,000 words.
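The 32K-token figure above maps onto the word count via a common rule of thumb of roughly 0.75 English words per token. A minimal sketch of that back-of-the-envelope conversion (the ratio is an approximation, not an exact ChatGPT measurement, and real token counts vary by tokenizer and text):

```python
# Rough conversion between a model's token limit and word capacity.
# Assumption: ~0.75 English words per token (a common rule of thumb).

def approx_words(token_limit: int, words_per_token: float = 0.75) -> int:
    """Estimate how many words fit in a context window of token_limit tokens."""
    return int(token_limit * words_per_token)

def approx_tokens(word_count: int, words_per_token: float = 0.75) -> int:
    """Estimate how many tokens a document of word_count words will use."""
    return int(word_count / words_per_token)

print(approx_words(32_000))   # 32K tokens -> about 24,000 words
print(approx_tokens(24_000))  # about 32,000 tokens
```

This is only for gauging whether a document is anywhere near the limit; an actual tokenizer gives exact counts.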

Using Large Language Models Correctly

13:21 - 20:28

  • The episode discusses the importance of using large language models, such as ChatGPT, correctly.
  • There is a difference between marketing claims and real-life use cases for these models, with some companies exaggerating their capabilities.
  • Knowledge retention can be a problem when using ChatGPT or similar models.
  • Uploading documents in default mode allows users to create content connected to the internet, but there are limitations.
  • In default mode, uploaded documents may not be retained permanently, and they are handled differently than in custom GPTs.
  • Testing and experimentation are necessary to understand how ChatGPT responds to different queries and prompts.
  • ChatGPT is essentially an advanced auto-complete tool, reportedly with 1.8 trillion parameters.
  • To fix issues with document referencing in default mode, users need to make specific calls or prompts to guide ChatGPT's responses.
  • Memory limitations should be considered when working with large language models like ChatGPT; re-uploading documents may be necessary once the memory limit is reached.

Document Referencing and Knowledge Base

20:05 - 27:34

  • When working in default mode and trying to tap into knowledge from an uploaded document, you may need to re-upload the document once you exceed ChatGPT's memory limit.
  • In default mode, when referencing an uploaded document, ChatGPT shows a progress indicator while it is reading the document.
  • Even if you upload a document and tell ChatGPT to retain it in memory, it may still not remember it unless you reference it directly or include it in the inline text.
  • It is important to keep an eye on token usage, as GPTs can have issues with this as well.
  • Repeated testing has shown consistent issues with custom GPTs when using separate documents versus a single writing-samples document.
  • Builders should take their time and configure GPTs correctly to avoid issues. Reminding the custom bot to check the knowledge base regardless of the user's query decreases the likelihood of problems.
  • If a GPT is not properly configured, it can be easily confused or fail to perform basic functions.
  • Uploading a PDF and then asking unrelated questions may cause ChatGPT not to call upon the knowledge base for answers.
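The reminder described above can be written directly into a custom GPT's instructions field. A hypothetical sketch of such an instruction (the wording is illustrative, not an official OpenAI template):

```text
Before answering any user query, ALWAYS search the attached knowledge
base first, even if the question seems unrelated to the uploaded files.
If the knowledge base contains relevant material, base your answer on it
and say which document you used. Only fall back to general knowledge
when the knowledge base has nothing relevant, and state that you did so.
```

Making the knowledge-base check unconditional is the point: as the episode notes, a GPT asked an unrelated question may otherwise skip the knowledge base entirely.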

Improving GPT Performance and Reliability

27:12 - 33:58

  • The GPT model has limitations in accessing personal details unless explicitly provided by the user.
  • There are potential issues with the context window and combination of queries that can affect the accuracy of GPT responses.
  • The upcoming GPT store will offer custom models, but users should be cautious and only buy from reputable sources.
  • Blindly trusting GPTs without thoroughly testing them can lead to errors and incorrect responses.
  • Conditional instructions can be used to improve the accuracy of GPT responses by directing it to specific documents based on user inquiries.
  • OpenAI is working on improving the reliability and performance of GPT models.
  • ChatGPT may miss or skim over content, making it important for users to verify its responses.
  • Token-usage tracking is not a built-in feature of ChatGPT, but external token counters can be used to monitor token usage and memory limits.
  • In custom GPTs, it is recommended to include instructions for checking the knowledge base first before providing information from other sources.
  • Testing and verifying custom GPTs is crucial to ensure their reliability, regardless of whether they are built by oneself or obtained from others.
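Since token tracking is not built in, a running tally of the conversation is one way to know when the memory limit is approaching. A minimal sketch, assuming roughly 4 characters per token (a common rule of thumb; a real tokenizer library would be more accurate):

```python
# Running token-usage tracker for a chat session, standing in for the
# external token counters mentioned above.
# Assumption: ~4 characters per token (a rough heuristic, not exact).

CONTEXT_LIMIT = 32_000  # tokens, per the episode's 32K figure

def estimate_tokens(text: str) -> int:
    """Very rough token estimate from character count."""
    return max(1, len(text) // 4)

class TokenTracker:
    def __init__(self, limit: int = CONTEXT_LIMIT):
        self.limit = limit
        self.used = 0

    def add(self, message: str) -> None:
        """Record a prompt or response against the running total."""
        self.used += estimate_tokens(message)

    def near_limit(self, threshold: float = 0.9) -> bool:
        # When this returns True, it may be time to re-upload key
        # documents, since earlier context can start getting dropped.
        return self.used >= self.limit * threshold

tracker = TokenTracker()
tracker.add("hello " * 5_000)  # ~30,000 characters of conversation
print(tracker.used, tracker.near_limit())
```

Each prompt and response would be passed through `add`, and `near_limit` signals when to re-upload documents before the context window silently truncates them.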

Optimizing GPT Usage

33:39 - 36:31

  • The more resources required for GPT to process, the less likely it is to provide a positive return.
  • Depending on the documents and their setup, it may be necessary to break up large documents into smaller ones and configure instructions for specific queries.
  • Consistent outputs across repeated tests indicate that a GPT is performing without issues.
  • Listeners are encouraged to subscribe, rate, and share the podcast with others who use or build GPTs.
  • Tomorrow's episode will feature a leader from Microsoft discussing how Copilots can benefit leadership and learning.
  • The podcast concludes with a reminder to subscribe, rate, and visit the website for more AI content.
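The advice above about breaking large documents into smaller ones can be sketched as a simple pre-upload step. A minimal example; the 2,000-word chunk size is an arbitrary illustration, not an OpenAI requirement:

```python
# Split a large document into smaller word-bounded chunks before
# uploading, as suggested in the episode.
# Assumption: chunking by word count is a stand-in for whatever
# splitting strategy fits the actual documents.

def chunk_by_words(text: str, max_words: int = 2_000) -> list[str]:
    """Split text into pieces of at most max_words words each."""
    words = text.split()
    return [
        " ".join(words[i : i + max_words])
        for i in range(0, len(words), max_words)
    ]

doc = "word " * 5_000          # a 5,000-word stand-in document
parts = chunk_by_words(doc)
print(len(parts))              # 3 chunks: 2,000 + 2,000 + 1,000 words
```

Splitting on natural boundaries (sections, chapters) rather than raw word counts would keep each chunk self-contained, which pairs well with instructions that route specific queries to specific documents.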