
AI & UXR

Everything You Need to Know About Tokens, Data Volumes and Processing in ChatGPT


4 MIN · Nov 26, 2024

Introduction to tokens and processable data sets 

When working with ChatGPT, you will quickly come across a central concept: tokens. But what exactly are tokens, and why are they important? Tokens are the smallest units of information that the model can process – they can be whole words, parts of words or even punctuation marks. The length of a token varies with language and context, but on average a token corresponds to about 4 characters or 0.75 words.
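
If you like, you can turn that rule of thumb into a quick estimator. The following little Python sketch is purely illustrative – it only uses the rough averages mentioned above, not a real tokenizer:

```python
def estimate_tokens(text: str) -> int:
    """Very rough token estimate based on the averages above:
    about 4 characters or 0.75 words per token."""
    by_chars = len(text) / 4
    by_words = len(text.split()) / 0.75
    # Average the two heuristics; a real tokenizer will differ slightly.
    return round((by_chars + by_words) / 2)

print(estimate_tokens("Tokens are the smallest units of information the model can process."))
```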


Why is this relevant? Because the maximum number of tokens that ChatGPT can process in a conversation or analysis determines how much information the model can handle at once. Currently, the limit is 8,192 tokens for GPT-4 and 128k tokens for GPT-4o.

This means that the entire content – including your questions or data and the answers that ChatGPT generates – must not exceed this limit. This token limit naturally affects how long a single conversation can be before older parts of the conversation are ‘forgotten’.


For comparison: 8,192 tokens correspond to roughly 20 to 25 book pages, and 128k tokens to well over 300 pages of an average book, assuming a page holds between 250 and 300 words. This shows that the models can take in quite a lot of information in one go – but with long texts or complex data, the limit can still be reached quickly.
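
If you want to check the maths yourself, here is the back-of-the-envelope calculation behind that comparison, using only the conversion factors from above (0.75 words per token, 250 to 300 words per page):

```python
# Back-of-the-envelope: tokens -> words -> book pages,
# using ~0.75 words per token and 250-300 words per page.
for limit in (8_192, 128_000):
    words = limit * 0.75
    print(f"{limit:>7,} tokens ≈ {words:,.0f} words ≈ "
          f"{words / 300:.0f}-{words / 250:.0f} pages")
```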


Dealing with large amounts of data in ChatGPT 

Let's say you want to analyse an entire chapter of a book – in principle, no problem! But what happens if the chapter is longer than 8,192 or 128k tokens? In such cases, ChatGPT cannot process the data in one go. A common assumption is that the model will simply split the data into digestible sections on its own – but this does not happen automatically.

You have to manually split the data into smaller sections and control the flow.


Here are a few tips on how best to do this:

  • Segment your text into thematically meaningful sections: Instead of sending everything at once, divide the text into smaller blocks that are coherent and easier to digest (a minimal code sketch for this follows the list below).

  • Link the sections together: To avoid losing context, briefly summarise what has been covered so far at the beginning of each new section.

  • Identify key information: If you know that certain parts of the text are more important than others, focus on them first. This way you can use the token limit more efficiently.
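
Here is a minimal sketch of the first tip – splitting a long text into token-limited chunks. It assumes OpenAI's open-source tiktoken library is installed, uses paragraphs as a crude stand-in for 'thematically meaningful sections', and the chunk size of 3,000 tokens is just an example value:

```python
import tiktoken  # OpenAI's open-source tokenizer: pip install tiktoken

def split_into_chunks(text: str, max_tokens: int = 3_000,
                      model: str = "gpt-4") -> list[str]:
    """Split a long text into paragraph-aligned chunks that each stay
    below max_tokens, so every chunk fits comfortably into one prompt."""
    enc = tiktoken.encoding_for_model(model)
    chunks, current = [], ""
    for paragraph in text.split("\n\n"):
        candidate = f"{current}\n\n{paragraph}".strip()
        if current and len(enc.encode(candidate)) > max_tokens:
            chunks.append(current)   # close the current chunk ...
            current = paragraph      # ... and start a new one
        else:
            current = candidate
    if current:
        chunks.append(current)
    return chunks
```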


Strategies for optimising the use of the token limit 

  • Focus on important data: To use the tokens efficiently, identify the most important points before sending the text. This saves space and gets you answers on the topics that really matter more quickly.

  • Summarise where possible: If you have a very large amount of data, condense the text as far as you can. The aim is to fit the essential content within the token limit without losing context.

  • Iterative processing: If all the context matters but the data set is getting too large, process the information iteratively: submit the data in parts and provide a brief summary of the most important points after each section so that the overall context is preserved (see the sketch after this list).
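
And here is what iterative processing could look like in code. This is only a sketch: it assumes the official openai Python package, an API key in your environment, and the split_into_chunks helper from the earlier example; the prompt wording is purely illustrative.

```python
from openai import OpenAI  # official OpenAI Python package

client = OpenAI()  # expects OPENAI_API_KEY in your environment

def analyse_iteratively(chunks: list[str], question: str,
                        model: str = "gpt-4o") -> str:
    """Send the chunks one after another and carry a running summary along,
    so the overall context survives even though no request sees the full text."""
    summary = ""
    for chunk in chunks:
        prompt = (
            f"Summary of the text so far:\n{summary or '(nothing yet)'}\n\n"
            f"Next section:\n{chunk}\n\n"
            f"Update the summary, keeping everything relevant to: {question}"
        )
        response = client.chat.completions.create(
            model=model,
            messages=[{"role": "user", "content": prompt}],
        )
        summary = response.choices[0].message.content
    return summary
```

The idea is that no single request has to hold the whole text – the running summary carries the essential context from one section to the next.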


Time dependency of processing 

You may be wondering: ‘What happens if I take a long break in a chat? Will ChatGPT forget everything?’ The good news is that processing is not time-dependent.

Whether you respond in minutes, hours, or even days, as long as the chat remains open and the token limit is not reached, the context will be preserved.


This means that long breaks won't affect the chat. Nevertheless, in very long chats, earlier information may still be ‘forgotten’. Why? Because the token limit applies to the entire chat history.

When the limit of 8,192 or 128k tokens is reached, older parts of the conversation are dropped to make room for new content – a kind of ‘memory loss’. That's why it makes sense to summarise the chat regularly or to repeat important points.
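
If you want to see that ‘memory loss’ coming, you can keep an eye on the size of your chat history yourself. A small sketch – the 80 per cent threshold is an arbitrary safety margin, and the encoding is only an approximation for GPT-4o:

```python
import tiktoken

# cl100k_base is the GPT-4 family encoding; GPT-4o uses a newer one,
# but for a rough budget check the difference hardly matters.
enc = tiktoken.get_encoding("cl100k_base")

TOKEN_LIMIT = 128_000             # context window discussed in this post
WARN_AT = int(TOKEN_LIMIT * 0.8)  # arbitrary safety margin

def history_tokens(messages: list[dict]) -> int:
    """Approximate token count of the whole chat history
    (ignores the small per-message overhead the API adds)."""
    return sum(len(enc.encode(m["content"])) for m in messages)

def time_to_summarise(messages: list[dict]) -> bool:
    """True once the history approaches the limit - a good moment to
    summarise the conversation so far and continue from the summary."""
    return history_tokens(messages) >= WARN_AT
```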


Another detail: If you process large amounts of data in smaller sections, it is helpful to always clearly indicate how the sections relate to each other. This helps ChatGPT to understand the context and process the data correctly.


Feedback at token limit 

There is one important thing you should know: as soon as the token limit is reached, ChatGPT will let you know. This is so that you are informed in good time and the context is not unexpectedly lost. You then have the option of summarising parts of the conversation, removing irrelevant information or taking other measures to ensure that the conversation can continue efficiently.


Practical tips and best practices 

To get the most out of ChatGPT, it helps to focus on the context and relevance of the information. The accuracy and precision of the data you send to ChatGPT directly affect the quality of the analysis. Therefore, it is worth preparing the data well before sharing it in the chat.

If you're working with particularly large amounts of data, it can be worthwhile to analyse, shorten or summarise the data with external tools before sending it to ChatGPT. That way, you make the best possible use of the available token budget.

For long chats, it's always a good idea to repeat key points or provide summaries periodically. This keeps the context clear and ensures that ChatGPT stays on top of things.


If you're wondering, there is no hard rule for token usage per message.

Sometimes a simple question can consume only a few tokens, while a complex question or long answer can require several hundred tokens. The important thing is simply to keep an overview so that the token limit is not reached too early.


Outlook for future developments 

Of course, it would be nice if we never reached the token limit. In fact, there are already plans to increase the amount of data that can be processed in future versions of ChatGPT. Let's see what the ‘4o3’ model brings us ;-)


Technical statistics and details of this chat 

By the way: this text is about 1,600 tokens long, and the chat I used to develop this post used about 1,000 tokens. A good tool for counting tokens is https://platform.openai.com/tokenizer – ChatGPT itself is not always reliable at counting them.
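
If you prefer to count tokens locally, OpenAI's open-source tiktoken library does the same job as the web tokenizer. A minimal example – the sample sentence and model name are just placeholders:

```python
import tiktoken  # pip install tiktoken

text = "Everything you need to know about tokens, data volumes and processing."
enc = tiktoken.encoding_for_model("gpt-4")  # picks the matching encoding
print(len(enc.encode(text)))  # exact token count for this model's tokenizer
```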


AUTHOR

Tara Bosenick

Tara has been active as a UX specialist since 1999 and has helped to establish and shape the industry in Germany on the agency side. She specialises in the development of new UX methods, the quantification of UX and the introduction of UX in companies.


At the same time, she has always been interested in developing a corporate culture in her companies that is as ‘cool’ as possible, in which fun, performance, team spirit and customer success are interlinked. She has therefore been supporting managers and companies on the path to more New Work / agility and a better employee experience for several years.


She is one of the leading voices in the UX, CX and Employee Experience industry.
