We are privileged to live in an age of information, yet it can be difficult to know how to sort through and use all of the information available to us. With the advent of widely available AI tools, that challenge has only increased.
AI is a powerful tool that can be used well or poorly. The purpose of this article is to provide debaters with some general advice on how to use AI well in debate. I offer this advice in my capacity as a debate coach and am not speaking authoritatively on behalf of NCFCA.
When thinking about how to use AI well, first keep in mind that we are ambassadors for Christ seeking to communicate truth with integrity and grace. Representing Christ means that we should be committed to integrity, including academic integrity, and that we should be committed to speaking only what is true.
It’s also important to understand AI tools and how they work. This article focuses on the use of LLMs, or large language models, like ChatGPT. LLMs are only as good as the data used to train them. They cannot currently generate new information; they can only synthesize existing data and predict what language is most likely to follow a prompt.
As a result, an LLM is a helpful tool when it is used to synthesize, summarize, or organize information that you feed it but not when it is used to generate original information. It can also generate ideas for research and writing, but it cannot and should not replace your own research and writing.
For example, if you asked ChatGPT to write a four-page argumentative paper (an affirmative case) on reforming Congress, complete with evidence and citations to prove each point, it could produce something that looks like a paper with real evidence and citations yet is anything but. LLMs work by mimicking what real people have written in research papers, but without controls to ensure that the sources, quotations, or data are real. For this reason, you shouldn’t use LLMs to generate evidence for debate.
You should also use caution when using LLMs to generate ideas for debate. An LLM can be very helpful as a brainstorming tool; you might prompt ChatGPT to list ideas for reforming Congress or reasons why governments should prioritize international cooperation in space exploration. When you get that list, treat it as a starting point for research. You will need to analyze those ideas using specific research from real experts to thoroughly understand how each idea works and fits under the resolution. You will also need to support those ideas with direct quotations from credible human sources.
You can also use LLMs to synthesize and organize your research or thoughts. For example, you might enter all of your ideas for your case, the quotations you want to use, the arguments you want to make, and then ask the LLM to create an outline for your speech. Or, you might enter a bunch of research you’ve been doing and then prompt it to highlight the three most important points from that research.
LLMs can also help you generate citations from the URLs of articles that you want to cite in debate. When you do this, be sure to double-check that the citations are correct.
One final caution on the use of AI in debate: it is possible to use an AI tool to “listen” live during your debate round and then prompt it to give you summaries of speeches or write the outline of your next speech. Accessing AI for any purpose during a debate round would violate NCFCA Debate Rule A4:
“Electronic Devices. Debaters may use electronic devices during the round but may not use them to research or to request, send, or receive information during the debate round with the exception of evidence exchange and communication between partners during online tournaments as permitted by the rules.”
There are also safety and ethical considerations when using AI which are beyond the scope of this article. Parents and coaches should do their own research on these topics and decide whether to allow their students to use any particular AI tool.
In summary, AI can be a useful tool as you prepare for debate when you use it to help you brainstorm ideas for research, summarize or organize information that you enter into the tool, or create citations from URLs that you enter. Even in these capacities, you should always double-check work generated by an AI tool. Using AI to generate evidence or write your case is problematic because LLMs can only predict language; they cannot ensure that the material they generate is accurate and supported by credible sources. As Christians committed to communicating truth, we must use AI tools in a way that supports rather than undermines truth.
Disclaimer: The information in this (or any other) blog article is intended for explanatory purposes and to promote good conversation and competition in our league. Blog articles are not intended to interpret, augment, or supersede our League Policies or Competition Event Rules.