
The Future of Sports Coverage: Part 1

Stats Perform’s Chief Scientist, Patrick Lucey Talks ChatGPT, Deep Learning and AI Technologies in Sport.

By: Patrick Lucey

OpenAI’s ChatGPT launched towards the end of 2022. The ‘conversational chatbot,’ trained on an enormous amount of the internet’s text, quickly accumulated millions of users keen to experience a new breakthrough in the day-to-day utilization of AI technologies.  

But is ChatGPT relevant to sport? Does it complement or compete with Stats Perform’s own machine learning models, trained on our own enormous amount of sports data? 

Spoiler: ChatGPT utilizes some of the same deep learning approaches that Stats Perform's AI team is using, but with different inputs and for a different objective. Those same AI techniques can in fact be found across the tech world, solving numerous problems. Here we'll use ChatGPT to explain these underlying AI technologies, as well as ChatGPT's value, its limitations, and where it could be relevant in the sports industry.

In a follow-up article we’ll also address a few current and future uses of similar deep learning methods by Stats Perform to solve problems for sport. 

 What is ChatGPT? 

ChatGPT is a model that has read most of the text on the internet to understand how words, sentences and paragraphs are structured, so that it can accept questions as text inputs and generate coherent-sounding answers in the form of dialogue-style text outputs.   

It uses ‘Generative AI’ to predict the next words in a sequence: given an input prompt, it chooses continuations that are statistically likely based on the context of the previous words. Its output takes the form of fluent, natural-sounding text responses.
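
As a rough illustration of what "predict the next word" looks like in practice, here is a minimal sketch using the open-source Hugging Face transformers library, with the small GPT-2 model standing in for ChatGPT's far larger model (the prompt and model choice are illustrative assumptions, not part of ChatGPT itself):

```python
# A minimal sketch of "predict the next word": ask a small open-source
# language model (GPT-2, standing in for ChatGPT's far larger model)
# which words are most likely to follow a prompt.
# Requires: pip install torch transformers
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

prompt = "The striker collected the ball on the edge of the box and"
inputs = tokenizer(prompt, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits        # a score for every word in the vocabulary
next_word_probs = torch.softmax(logits[0, -1], dim=-1)

# The five most statistically likely continuations, given the context so far
top = torch.topk(next_word_probs, k=5)
for token_id, prob in zip(top.indices, top.values):
    print(f"{tokenizer.decode(int(token_id))!r}: {prob.item():.3f}")
```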

In technical terms, ChatGPT is a variant of a Generative Pre-trained Transformer (GPT) large language model (LLM) that uses prompt engineering to coax it into producing conversation-like behaviour.  

Language models aren’t new, but the amazing thing about the GPT-3 LLM used by ChatGPT is that it is really large: not only the data it is trained on (an enormous amount of the internet’s text) but also the number of parameters it uses (175 billion). It uses a transformer network to learn correlations and patterns in this vast text repository at a scale that can seem superhuman.
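
For the curious, the core computation inside a transformer is ‘attention’: every word in the input scores how relevant every other word is to it, and those scores decide how much of each word's information gets mixed into the representation of the rest. The sketch below uses toy sizes and random vectors rather than GPT-3's actual learned weights:

```python
# Bare-bones scaled dot-product attention, the core operation inside a
# transformer layer (toy sizes and random vectors; real models stack many
# such layers and learn the projections that produce Q, K and V).
import numpy as np

def attention(Q, K, V):
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)                 # relevance of every word to every other word
    scores -= scores.max(axis=-1, keepdims=True)    # numerical stability
    weights = np.exp(scores)
    weights /= weights.sum(axis=-1, keepdims=True)  # softmax: attention weights per word
    return weights @ V                              # context-weighted mix of the values

seq_len, d_model = 4, 8                             # e.g. a four-word sentence
rng = np.random.default_rng(0)
Q, K, V = (rng.standard_normal((seq_len, d_model)) for _ in range(3))
print(attention(Q, K, V).shape)                     # (4, 8): one context-aware vector per word
```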

By cleverly crafting the input question to describe the required content and style of the response, known as prompt engineering, ChatGPT is able to draw on a small number of the textual samples it was trained on that display the required style and content, and synthesize them into essay-like outputs: articles and answers to questions that mimic human language.
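
A concrete, simplified illustration of prompt engineering: the same model gives very different output depending on how precisely the prompt describes the content and style required. The facts and wording below are invented for illustration only:

```python
# Illustrative prompt engineering: the same facts, wrapped in an instruction
# that spells out the content, style and format we want the model to produce.
# (Facts and wording are invented for illustration.)
facts = "Team A beat Team B 2-1; both of Team A's goals came after the 80th minute."

bare_prompt = facts  # on its own, this invites a vague or rambling continuation

engineered_prompt = (
    "You are a sports journalist. Using ONLY the facts below, write a "
    "two-sentence match recap in a neutral, newswire style.\n\n"
    f"Facts: {facts}\n\n"
    "Recap:"
)

# Either string would be sent to a large language model as its input;
# the engineered version steers both the style and the content of the answer.
print(engineered_prompt)
```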

What else can Generative AI do?

Large Language Models are one type of Generative AI model. Generative AI models can also be prompted to take text inputs and produce other kinds of output, such as images (e.g. OpenAI’s DALL-E 2) and video (e.g. Meta’s Make-A-Video). Even robotics is utilizing this technology, with the release of RT-1, a transformer that takes natural-language text instructions and images as inputs and can be prompted to generate actions for real-world robotics tasks as outputs.

LLMs are also used for content understanding, translation and question-answering in specific domains (e.g., extracting, summarizing and encoding clinical records or legal documents) and for customer service. An LLM also powers assistive coding tools such as GitHub’s Copilot, which can flag errors and generate or auto-complete code based on the context of the specific API being worked with.
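
As a small, hedged illustration of this kind of content understanding, the sketch below summarizes an invented clinical-style note with an off-the-shelf open-source summarization model from the Hugging Face transformers library (not the specific LLMs behind the commercial tools mentioned above):

```python
# Sketch of LLM-style content understanding: summarizing an invented
# clinical-style note with an off-the-shelf open-source summarization model.
# Requires: pip install transformers torch
from transformers import pipeline

summarizer = pipeline("summarization")  # downloads a default summarization model

note = (
    "Patient reported intermittent knee pain after training sessions over the "
    "last three weeks. Imaging showed no structural damage. A reduced training "
    "load was advised, with a follow-up review scheduled in two weeks."
)
print(summarizer(note, max_length=30, min_length=10)[0]["summary_text"])
```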

The limitations of ChatGPT 

Generative AI does what it says on the label – it generates predictions.   

ChatGPT knows about words: which words are likely to appear together in sentences, and in what order. The actual word it ultimately chooses doesn’t matter to the model, as long as it sounds plausible within the sentence. That is a problem, because some of those words are the facts, and the model will often “hallucinate” them. Facts can’t change.

In addition to the problem of hallucination, ChatGPT writes responses in such an authoritative way that incorrect answers will still sound believable, especially for non-experts.   

This is not only an issue for ChatGPT, but for Generative AI tools overall. They are really good at faking realistic content (e.g., deep fakes in images, video and audio). Like all tools, you have to know what the technology can do, but, most importantly, what it can’t do.

What can ChatGPT do (and what can’t it do) in sport?

First, ChatGPT has only been trained on text up to September 2021, so it can’t give any answers about recent events. Secondly, GPT-3 requires text as the input. The language of sport is different to text, and at Stats Perform we have created the ‘language of sport’ which powers our AI. And finally, but importantly, ChatGPT is not optimized to ensure the ‘answers’ it predicts are factually correct.

ChatGPT knows nothing about rugby or cricket, basketball or football, countries, cups, bats, balls or anything else about the world.  It hasn’t been trained on facts and stats or to recognize accurate information upon which to generate its predictions. It only knows about the order of words.   

This is limiting in domains like news and sport, where results are results and stats are stats, especially as ChatGPT can generate very well-written articles that seem believable but are not based on what actually happened.

You can see this by asking it very specific queries, such as this one about a player in the 2019 Rugby World Cup:

The first two sentences of this well-written answer are factually incorrect: Cheslin Kolbe scored two tries against Italy in the group stages and one try in the final against England.
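
A simplified sketch of how such a claim could be caught is shown below: compare the numbers in the generated text against a structured record of what actually happened. The generated sentence is invented for illustration, and a real fact-checking pipeline would need far more robust claim extraction.

```python
# Simplified sketch: checking a numeric claim in generated text against a
# structured record of what actually happened (the generated sentence is
# invented; real pipelines would need far more robust claim extraction).
import re

actual = {"Cheslin Kolbe": {"tries_vs_italy": 2, "tries_in_final": 1}}

generated = "Cheslin Kolbe scored 3 tries against Italy in the group stages."

claim = re.search(r"scored (\d+) tries against Italy", generated)
if claim:
    claimed = int(claim.group(1))
    recorded = actual["Cheslin Kolbe"]["tries_vs_italy"]
    if claimed != recorded:
        print(f"Possible hallucination: generated text says {claimed}, "
              f"the data feed says {recorded}.")
```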

Is there a role for ChatGPT in Sport?   

Powered by structured, accurate and up-to-date data and content feeds, a future version of ChatGPT could in theory quickly produce text articles to update fans, triggered by a text question such as ‘write me a preview for all of tomorrow’s games’.   

However, automated machine-written articles like these already exist. They don’t use the same ‘conversational’ processes that ChatGPT provides, but they are factually correct.

It is a nuance, but ChatGPT generates both the narrative and the information within the narrative. This makes generative AI problematic for synthesizing news reports, including sports reports.

Instead, specific fact-based products like Stats Perform’s Automated Game Previews first use the stats and facts from our sports data feeds as the seeds of the story, and then build the narrative around that concrete information.    

Because our tools start with our data feeds, and those data feeds are structured and accurate, the articles that are written don’t change and aren’t ‘generated’ predictions: they are what happened.
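
To make the fact-first idea concrete, here is a deliberately simple sketch: the structured feed is the source of truth, and the prose is assembled around it, so the numbers in the article can never drift from the feed. The data and template are invented; this is not Stats Perform’s actual Automated Game Previews pipeline.

```python
# Deliberately simple illustration of fact-first generation: the structured
# feed is the source of truth and the narrative is assembled around it.
# (Invented data and template; not Stats Perform's actual pipeline.)
match_data = {
    "home": "Team A", "away": "Team B",
    "home_goals": 2, "away_goals": 1,
    "scorers": ["Player X (34')", "Player Y (81')", "Player Z (88')"],
}

def fact_first_report(m: dict) -> str:
    if m["home_goals"] > m["away_goals"]:
        headline = f"{m['home']} beat {m['away']}"
    elif m["home_goals"] < m["away_goals"]:
        headline = f"{m['away']} won away at {m['home']}"
    else:
        headline = f"{m['home']} and {m['away']} drew"
    return (f"{headline} {m['home_goals']}-{m['away_goals']}. "
            f"The scorers were {', '.join(m['scorers'])}.")

print(fact_first_report(match_data))
```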

Other automated sports report-style services that use our data feeds as their basis include ‘Pressbox Live’, which generates real-time insights for TV commentators, and ‘Pressbox Graphics’, which produces automated images for social media or blogs whenever a goal, touchdown or basket is scored.

What’s Next?

In Part 2, we look at the underlying AI behind ChatGPT and review how it is already in use to elevate sport experiences, particularly in the high-performance world.

In Part 3, we address the implications of the latest ChatGPT-4 model and what it changes for sport.