I’m Afraid My Teacher Will Accuse Me of Using AI

What About the Really Good Kids?

How much time and energy are teachers putting into catching the cheaters, without considering what the threat of AI-related punishment does to the really good students?

When teachers give long threatening speeches about what will happen to students who use generative AI, what impact does that have on students who would never do something like that?

IT IS TERRIFYING TO THEM.

They fear being accused of doing something they did not do. They fear that their academic achievements could be attributed to a machine, and how will they ever convince their teachers (or their professors) that they did not actually commit this act of academic misconduct? They are afraid to even touch the tools in case someone believes that they cheated.

It’s Easy to Focus on the Problem

In our classroom spaces, it’s easy to go down the rabbit hole of trying to prevent students from using AI to complete their work.

  • How many students in your class do you think would actually do it?
  • What is the cost to your high flyers when the focus is solely on inappropriate usage of generative AI? 
  • If we teach students to fear AI, are we preparing them for a world where AI is becoming increasingly ubiquitous?
  • Who is going to teach this generation of students how they CAN use AI?
  • Have you ever examined the flip side of this question?

Unintended Harm

This post is not about shaming teachers for trying to take control of a new disruptive technology. Generative AI has disrupted the process of teaching and learning. It has brought new challenges and new considerations, and we are all figuring this out together.

In the process of conducting research for my Doctorate in Education, I’ve been fortunate to benefit from conversations about both K-12 and post-secondary education, and I’ve come to realize that real harm is done to students who hold themselves to a high academic standard when teachers and professors threaten them with the ramifications that will follow if the teacher suspects them of cheating on a written assignment.

Can We Find a Happy Medium?

Teachers, you do need to have a plan in place for those times when a student does engage in academically dishonest behaviours, just as you have a plan in place for other behaviour infractions.

If you’d like some thoughts that may be helpful as to how you might navigate this challenge gracefully, please take a look at my blog post titled AI Detection Tools. It does not advocate for using those tools – there are far too many false positives in that environment – but it offers a script that, in almost any conversation, will let you get to the bottom of the issue without burning bridges or destroying your teacher-student relationship.

These challenges can be navigated. Kids’ behaviour needs to be corrected sometimes. But we don’t need to let AI take dignity away from us, our curriculum, or, most important of all, our students.

AI Detection Tools

Students and Generative AI

When ChatGPT was released in November 2022 – the moment many consider the passing of the Turing Test – things changed for teachers, especially teachers who rely on the essay as their “gold standard” for assessment. Suddenly, students could use generative AI to complete written work for them, leaving some teachers floundering.

AI detection tools like Turnitin or GPTZero are tempting to use. The teacher takes the student’s written work, loads it into one of these detection tools, and the tool confirms or refutes the teacher’s suspicion that the written work may have been completed by AI. Easy, right?

There are actually a few problems in this scenario. We’ll go through them one at a time here.

They don’t work

It has been shown repeatedly in the empirical literature that AI detection apps fail; research reveals that these detection tools remain unreliable (An & James, 2025; Moorhouse, 2024). Classroom climate can be quickly poisoned by wrongly accusing a student of using generative AI on an assignment.

The teacher-student relationship takes time to develop, and it serves a powerful pedagogical purpose in the classroom. When the relationship is destroyed, the impact reaches well beyond the teacher and one student, and it would be a genuine shame for this pedagogical tool to be obliterated by a false result from an AI detector.

It’s an “Arms Race”

Villasenor (2023) stated that in the arms race between writing tools and detection tools, “the AI writing tools will always be one step ahead of the tools to detect AI text”: as quickly as the detection tools catch up, the tools students write with will have moved ahead again. Van Dis et al. (2023) noted the same, stating that “such detection methods are likely to be circumvented by evolved AI technologies and clever prompts” (p. 225). This alone suggests that these tools will not yield the result that the sleuthing teacher is seeking.

One Step Ahead

It should also be noted that students who would choose to use AI to complete their writing will likely also use social media apps like TikTok to pick up new techniques to conceal or “humanize” the text they intend to submit. There are many content-obfuscation techniques that a student may put into play if they are the type of student who would undertake such an action.

How Prevalent is the Problem?

It’s difficult to gauge the actual number of students who will choose this method of cheating on their schoolwork. The companies who make these apps to detect academic misconduct have an incentive to claim rates of cheating higher than the literature would indicate, as they are selling a product. It’s in their best interest to make claims about extremely high numbers of “gotcha” documents as a means of convincing potential customers that their product is valuable.
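
Prevalence matters for another reason: the lower the real rate of cheating, the worse false positives become. A quick back-of-the-envelope calculation shows why. All of the numbers below are illustrative assumptions on my part, not figures from the literature; the point is simply that even a detector with a seemingly low false-positive rate will wrongly flag a substantial share of honest students.

```python
# Illustrative arithmetic only: every rate here is an assumption for the
# sake of the example, not a figure from any study or vendor.
essays = 200           # essays submitted over a term
cheat_rate = 0.10      # assume 10% were actually AI-written
sensitivity = 0.95     # assume the detector catches 95% of AI text
false_positive = 0.05  # assume it wrongly flags 5% of honest work

ai_written = essays * cheat_rate          # 20 essays
honest = essays - ai_written              # 180 essays

true_flags = ai_written * sensitivity     # 19 correctly flagged
false_flags = honest * false_positive     # 9 honest students flagged

share_false = false_flags / (true_flags + false_flags)
print(f"{false_flags:.0f} of {true_flags + false_flags:.0f} flagged essays "
      f"({share_false:.0%}) came from students who did nothing wrong.")
```

Under these assumptions, roughly one in three accusations would land on an innocent student – exactly the relationship-destroying outcome described above.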

Should Students use Generative AI?

According to Weber-Wulff et al. (2023), “the use of AI tools is not automatically unethical. On the contrary, as AI will permeate society and most professions in the near future, there is a need to discuss with students the benefits and limitations of AI tools, provide them with opportunities to expand their knowledge of such tools, and teach them how to use AI ethically and transparently” (p. 2). One of the basic beliefs teachers hold is that they are preparing students for the real world. To that end, teachers will need to make their peace with the fact that AI is here to stay, and to prepare students appropriately for a future that includes AI, they will need to adjust their means of assessment in ways that go beyond the essay. This is not to say that we must abandon the essay as an assessment mechanism, but it does demand some innovation and rethinking on the part of teachers everywhere.

So yes, students should use AI when it is appropriate to do so, and teaching students HOW to use AI ethically and appropriately is going to fall on the shoulders of teachers.

So What Should a Teacher Do?

AI detectors don’t work, and a false accusation can destroy the teacher-student relationship. So, what should a teacher do if they suspect that a student has utilized generative AI to write a document that is part of the class assessment?

This is actually where you need to lean on the teacher-student relationship, and it can actually be an opportunity to further build that relationship, believe it or not.

But you’re going to have to play “Columbo” for a few minutes.

(Apologies to the younger generation who are not so familiar with the old Columbo episodes. Columbo was a popular American mystery TV series whose title character, an LAPD homicide detective, solved murders while seeming like he just couldn’t put the pieces together – appearing bumbling and disorganized to hide his sharp, observant mind.)

What I mean is that you’ll need to have a conversation with the student you suspect of having used generative AI to write their work, and you may have to lay it on a little bit thick. Maybe try something like this:

The Script

Teacher: So, I read your essay over the weekend, and wow!!! Has your writing ever improved this year!! When I compare what you turned in for this essay to the work you were writing in September (pick your date/time in the past) I am blown away.

(watch for signs of discomfort; fidgeting, facial redness, beads of sweat, aggression)

Teacher: Here’s the thing though, it’s my job to teach and assess the curriculum, and because your writing has improved so dramatically, it’s my job to make sure that you understand the outcomes on the [insert name of course you are teaching said student] program of studies. So, I’m going to ask you some clarifying questions to ensure that your comprehension of the curriculum is at the level that this writing would suggest that it is.

(continue to watch for signs of discomfort)

Teacher: When you wrote [insert phrase from student writing that seems unlikely that they actually wrote], what did you mean? How did you draw that conclusion? [Ask any question that occurs to you with respect to the writing they submitted.]

(Just be Columbo. Be confused, don’t reveal your cards, and don’t make an accusation.)

At this point you will be perilously close to having the student confess.

Ask another clarifying question. If the student actually wrote the work, they should have no problem answering your questions, and you should be able to actually mean your compliments of their writing if they are able to answer your questions.

If the student cannot answer your questions, but will not confess, provide them with a sheet of paper and a pen or a pencil and ask them to write a summary paragraph that would allow someone who has never heard of [insert topic of the essay here] before to understand the fundamental premises of the essay. 

Again at this point, you’re on the verge of the truth here, and no accusation has been made to the student. 

Factor what they write down on that sheet of paper into their grade.

Scale this to your Entire Class

You may want to consider scaling this summary task to your entire class. Have every student complete it in the moments after they submit their essays to you for grading. It matters not whether they submit their work through an LMS like Google Classroom or print it and hand it in: ask every student in your class to take out a single sheet of paper and a pen or a pencil, and have them write a summary of their essay in class, without the essay or a computing device. Just a pen-and-paper summary of what they just handed in to you.

If you build this accountability into your system, students who are determined to cheat will save their AI use for someone else’s class. You won’t be the target of this misbehaviour for long.

References

An, Y., & James, S. (2025). Generative AI Integration in K-12 Settings: Teachers’ Perceptions and Levels of Integration. TechTrends. https://doi.org/10.1007/s11528-025-01114-9

Moorhouse, B. L. (2024). Beginning and first-year language teachers’ readiness for the generative AI age. Computers and Education: Artificial Intelligence, 6, 100201. https://doi.org/10.1016/j.caeai.2024.100201

Van Dis, E. A. M., Bollen, J., Zuidema, W., Van Rooij, R., & Bockting, C. L. (2023). ChatGPT: Five priorities for research. Nature, 614(7947), 224–226. https://doi.org/10.1038/d41586-023-00288-7

Villasenor, J. (2023). How ChatGPT can improve education, not threaten it. Scientific American. https://www.scientificamerican.com/article/how-chatgpt-can-improve-education-not-threaten-it/

Algorithmic Governance, Epistemic Fracture, Surveillance and Visibility

Introduction

We are living in a time when the pace of technology moves so quickly that all sectors of society are in constant flux, adjusting to the changes that continually roll out from technological innovators. To situate the pace of technological transformation, we need only consider that in 1965, microchip engineer Gordon Moore, cofounder of the Intel Corporation, famously observed that the number of components on a microchip was doubling every year, which resulted in technological advancements continually improving while simultaneously becoming more affordable. Shalf and Leland summarized Moore’s Law as the prediction “that this trend, driven by economic considerations of cost and yield, would continue for at least a decade, although later the integration pace was moderated to doubling approximately every 18 months” (2015, p. 14). This already incredible rate of change has brought forth new challenges and considerations to countries and cultures everywhere.

Modern humans are inundated with information, news, communication, and a wide array of other notifications from all manner of devices. With this ease of information flow and data consumption, new challenges have arisen, not the least of which is the concept of algorithmic personalization, also referred to as algorithmic governance, or algocracy (Aneesh, 2006). Defined as “the probability that a set of coded instructions in heterogeneous input-output computing systems will be able to render decisions without human intervention and/or structure the possible field of action by harnessing specific data” (Issar & Aneesh, 2022, p. 3), algorithmic governance exists behind the scenes, largely unnoticed, in many of our digital interactions. Notwithstanding the fact that “algorithms are a powerful if largely unnoticed social presence” (Beer, 2017, p. 2), they appear not to be a topic of concern to many people beyond those who work in technology.

Regardless of the lack of popular concern, the algorithms that operate in the background of the technologies we engage with are a powerful social influence, holding the potential to control the flow of information (Alkhatib & Bernstein, 2019; Harris, 2022; Hobbs, 2020), the credibility of the information (Connolly, 2023; Blake-Turner, 2020; Harris, 2022; Hobbs, 2020; Issar & Aneesh, 2022), the surveillance of the people (Issar & Aneesh, 2022), and the visibility of the people (Bucher, 2012; Hoadley, 2017) who use the technology. The fact that “authority is increasingly expressed algorithmically” (Pasquale, 2015, pp. 7-8) should present concerns to learning scientists, as hegemonic processes, epistemic stability, obscured voices, and human agency sit at the core of the Learning Sciences, which aims to “productively address the most compelling issues of learning today” (Philip & Sengupta, 2021, p. 331). Though some authors use the term ‘algorithmic personalization’, I will use the term algorithmic governance throughout this paper, to continue to underscore the power wielded by these ubiquitous algorithms.

Flow of information and misinformation

The first topic to address is the flow of information, as “today, algorithmic personalization is present nearly every time users use the internet, shaping the offerings displayed for information, entertainment, and persuasion” (Hobbs, 2020, p. 523). This brings forward an obvious epistemic question: who decides which items are brought to the user’s attention and, equally importantly, what is not brought to the user’s attention? The lack of transparency of the algorithms (Alkhatib & Bernstein, 2019, p. 3), coupled with the fact that even those who create algorithms cannot fully understand the machine-learning mechanisms by which the decisions are reached (Hobbs, 2020; Rainie & Anderson, 2017), creates a perplexing and nebulous problem: we don’t actually know who controls our flow of digital information. This creates an epistemic fracture, in the sense that the manner in which information is delivered to the user is unknown, and the information being delivered may or may not be true. Societies across the world are facing intense social and political polarization (Conway, 2020, p. 3), and the algorithms, through their role in reinforcing problematic beliefs, are complicit in the creation of this fragmentation.

A quick glance at the creators and CEOs of a few of the major technology companies (Google [Alphabet], Facebook [Meta], Twitter [X] and Amazon) suggests that white males have dominated the industry to date, and it would be illogical to assume that the algorithms, written by white, western, colonial settlers, would be void of any human bias. Hobbs summed it up succinctly, saying that “algorithms are created by people whose own biases may be embodied in the code they write” (2020, p. 524). This demands attention, as the potential to continue the hegemonic control of information exists within the algorithms. Considering the colonial mindset upon which Canada and the United States were founded, asking questions about who is determining the content we consume digitally is imperative; our history is one of enslavement and White dominance as opposed to one of collaboration and equality, and this legacy may now play a silent, covert role in our digital society today. We need only look to our recent past to see that our print history textbooks served to perpetuate the domination of white culture, which King and Simmons sum up saying “in many traditional history textbooks, history moves through a paradigm that is historically important to the dominant White culture” (2018, p. 110). It does not seem a leap in logic to assume that at least some of the algorithms underlying the digital technologies we use on a daily basis may be complicit, as textbooks have been, in focusing the attention of the user back onto a White gaze. Marin’s statement that Western assumptions “often tacitly work their way into research on human learning and development and the design of learning environments” (2020, p. 281) underscores not only the possibility, but indeed the likelihood, that this oppression is ongoing today.

This suspicion of control is compounded by the occasional changes that are actually visible. An example is Elon Musk arbitrarily changing the information flow on Twitter, first requiring users to have a Twitter login to view tweets, then silently removing that requirement and instead limiting the number of tweets a person would be permitted to read in a given day (Warzel, 2023). Compounding the dubious nature of these changes, Musk is a “self-professed free-speech ‘absolutist’” (Warzel, 2023), a description that serves not to alleviate, but rather to underscore, reasons to be suspicious of his platform and its algorithm, as some of the statements he has personally made ‘freely’ have revealed him to be duplicitous (Farrow, 2023). It is worthwhile, however, to note that since Musk’s takeover and his ongoing rebranding and changing of the platform, many users have taken a break from it, have left it entirely, or do not see themselves being active on it a year down the line (Dinesh & Odabaş, 2023). When the majority of users signed up for Twitter, these restrictions (as well as their subsequent easing) were not what they signed up for or agreed to. Then again, when Obar and Oeldorf-Hirsch updated the academic literature regarding people’s reading of user agreements, the previous research was supported: “individuals often ignore privacy and TOS policies for social networking services” (2020, p. 142). So, although the user experience on Twitter has changed since Musk’s acquisition, it cannot really be argued that users lost terms they had knowingly agreed to, as they would not likely have read those terms in the first place.

Credibility of information

A second major consideration as it pertains to algorithmic governance is the credibility of the information we encounter online. We have already established that the flow of information is controlled, shaped, eased, and released algorithmically. These same algorithms are also responsible for the broad distribution of the barrage of disinformation and fake news in recent years. Misinformation is content that circulates online containing untrue information, but the intention behind it is, at least in some cases, innocent, in that the person sharing it believed it to be real. Altay et al. defined misinformation “in its broadest sense, that is, as an umbrella term encompassing all forms of false or misleading information regardless of the intent behind it” (2023, p. 2). Fake news, on the other hand, has a more specific definition, as it is deliberately untrue. Springboarding from the definition Rini (2017) proposed for fake news, Blake-Turner defined a fake news story as

one that purports to describe events in the real world, typically by mimicking the conventions of traditional media reportage, yet is [not justifiably believed by its creators to be significantly true], and is transmitted [by them] with the two goals of being widely re-transmitted and of deceiving at least some of its audience. (2020, p. 2)

Politicians and leaders regularly engage in the creation and promotion of fake news in their campaigns, news releases, and press conferences in their quest to maintain their voting base and, whenever possible, to increase it. This fake news is then shared and redistributed by followers of the political party responsible for it, run through the algorithms that govern information, and delivered to the people who are most likely to believe it.

Lying is by no means a new skill in the world of politics. From the beginnings of democracy, impressing the voter in some capacity has been important to gaining or retaining power. “The importance of the political domain ensures that some parties have good pragmatic reason to fake such content – a point illustrated by the long history of misleading claims and advertisements in politics” (Harris, 2022, p. 83). The newcomer is the ability of the common person to create content that appears to be true. In the past, news and information were communicated through television, newspapers, magazines, and books, all of which involved an editor who would carefully read through manuscripts and determine their publication value. Today, anyone with a computer, some simple photo-editing apps, and a commitment to an idea can create content that not only seems real but is entirely believable. Our older generations have lived the majority of their lives in a time when published material had already been vetted, and to them, published materials were factual. Now they, along with the younger generations, are faced every day with realistic fakes that challenge everyone to question the truth of practically everything encountered online. Sources that were once able to deliver accurate and factual knowledge are now deceptive, and at times are even difficult to fact-check.

Deepfakes are a newcomer to the world of publishing that ushers in an even deeper level of falsehood: the obscuring of facts through incredibly inauthentic yet lifelike video footage. “The term ‘deepfake’ is most commonly used to refer to videos generated through deep learning processes that allow for an individual’s likeness to be superimposed onto a figure in an existing video” (Harris, 2022, p. 83). Our epistemic environment has already been compromised by the prevalence of untrue words typed on the screen, along with compellingly falsified photos and images, and now we are facing the corruption of what was previously seen as the “smoking gun” of truth: video evidence. Harris also noted that at the time of his writing, a mere year ago, deepfakes remained relatively unconvincing; with the sudden advent of generative AI, deepfakes have already grown markedly more realistic.

Misinformation and fake news create epistemic problems in modern society. Blake-Turner defined an epistemic environment as including “various things a member of the community is in a position to know, or at least rationally believe, about the environment itself” (2020, p. 10). The key words in the definition – rationally believe – underscore how the existence of fake news and deepfake technology creates tension between what is fact and what is believed to be fact. At the time of this writing, the former American President has been indicted four times, in four different jurisdictions, facing 91 felony charges (Baker, 2023), almost all of which relate to lying, misinformation, and ultimately turning those lies into action in an attempt to overturn the results of the 2020 federal election. In an illustration of the severity of the epistemic problem, those lies and disinformation resulted in the attack on the U.S. Capitol on January 6, 2021; they have resulted in hundreds of people being sentenced to jail time for their participation in that riot; police have been beaten and killed; and the threats of violence and riots continue from this former president. Blake-Turner helps make sense of the manner in which these events came to pass: “the more fake news stories that are in circulation, the more alternatives are made salient and thereby relevant – alternatives that agents must be in a position to rule out” (2020, p. 11). The onslaught of lies, misinformation and political propaganda has created an environment in which many people struggle to find the truth in the midst of all this chaos and deceit.

Surveillance of the people

Another crucial element of the algorithms that live in our midst is the subtle, yet ongoing, surveillance of the people who use the technology. As users of networked technology, we should all be aware that surveillance could be occurring, but the extent to which it is really happening should be of concern. “While the problem of surveillance has often been equated with the loss of privacy, its effects are wider as it reflects a form of asymmetrical dominance where the party at the receiving end may not know that they are under surveillance” (Issar & Aneesh, 2022, p. 7). Foucault (1977) described surveillance through what he termed panopticism, an architectural arrangement whereby people are always being watched. Arguably, the panopticon has been recreated virtually via the digital trails we create when we utilize networked technologies. In 2016, the British firm Cambridge Analytica reported having 4,000 data points on each voter in the United States; data which included some voluntarily given information, but also much subversively gathered data, including data from Facebook, loyalty cards, gym memberships and other traceable sources (Brannelly, 2016). While these numbers are shocking, the situation is made worse by the fact that this data was sold, and then used by the Republican party to target and influence undecided voters to vote for Donald Trump in the 2016 election. American voters were oblivious to the fact that their data was being collected in this manner, that their personal data was being amassed into one neat package, and that this package was being sold for the purpose of manipulating emotions to achieve the political goals of one party. Ironically, as his court dates approach, even the duplicitous former president of the United States could not escape surveillance; his movements, messages, conversations, and other interactions were also recorded, and though he continues to lie about his actions and manipulate some public perception of his deeds, he has been unable to exert enough control to avoid, eventually, being exposed. The ongoing misinformation campaign and the algorithmic governance, however, will continue to provide his supporters with images, words, articles and ideas that uphold their damaged and inaccurate beliefs.

In this model of surveillance, everyone is being watched; everyone is visible. Bucher stated that “surveillance thus signifies a state of permanent visibility” (2012, p. 1170); however, “concerns about the privacy impact of new technologies are nothing new” (Joinson et al., 2011, p. 33). Within networks and social media there exists a privacy paradox whereby “individuals appear to value privacy, but when behaviors are examined, individual actions suggest that privacy is not a priority” (Norberg et al., 2007; Obar & Oeldorf-Hirsch, 2020, p. 142). After hastily clicking “accept” on the user agreement, we navigate through the internet, viewing personalized advertisements and “information” nuggets that align with our personal interests, and we grow increasingly oblivious to the fact that this algorithmic personalization is part of what is termed surveillance capitalism, “the practice of translating human experience into data that can be used to make predictions about behavior” (Hobbs, 2020, p. 523). We are seen by someone, somewhere, every time we make a purchase, click like on a video or social media share, swipe our points card at a store, drive past someone’s Ring camera, plug in our electric vehicle to charge, and during myriad other activities too numerous to mention. This surveillance contributes to the data points that are logged for every individual.

Visibility of the people

The surveillance and visibility of all people through the algorithms that collect data should not be confused with people online being visible. Indeed, the algorithms behind many technologies serve to enforce and underscore the prejudiced paradigms often enacted in the face-to-face world. Huq reported that “police, courts, and parole boards across the country are turning to sophisticated algorithmic instruments to guide decisions about the where, whom, and when of law enforcement” (2019, p. 1045). This is a terrifying prospect for people of marginalized communities who have historically been targeted by the law. Alkhatib and Bernstein summed up the findings of researchers, saying “these decisions can have weighty consequences: they determine whether we’re excluded from social environments, they decide whether we should be paid for our work, they influence whether we’re sent to jail or released on bail” (2019, p. 1). The faceless anonymity afforded by the internet is not equally afforded, as the algorithms that follow us on our digital paths ensure that our lives are logged and then mathematically and computationally assessed and delivered back to us through algorithmic governance.

Geography has historically defined the physical location of a person on the globe; however, in a globalized world of networked interactions, the definition needs to extend to the places we visit online. Researchers have argued that “space is not simply a setting, but rather it plays an active role in the construction and organization of social life which is entangled with processes of knowledge and power” (Neely & Samura, 2011; Pham & Philip, 2021). A lens of critical geography is warranted as we consider the impact and implications of the algorithmic power we engage with daily.

Although the concept of the digital divide has been a topic amongst educators since the term was first coined in the 1990s, it has by and large been limited to the question of students having access to digital devices by which to access the information contained on the internet. The digital divide and critical geography must intersect when we examine online interactions, to ascertain not only the status of the devices our students have access to, but also the subliminal reinforcers of racism, marginalization and ontological oppression embedded in the digital landscape. Gilbert argued that “‘digital divide’ research needs to be situated within a broader theory of inequality – specifically one that incorporates an analysis of place, scale, and power – in order to better understand the relations of digital and urban inequalities in the United States” (2010, p. 1001), a statement easily extended to include Canada. The digital divide must also include the racialized experience of minorities and people of colour: not only do people of colour encounter advertisements online that differ from those shown to white people, they also experience challenges such as the errors frequently made by facial-recognition systems, which “make mistakes with Black faces at far higher rates than they do with white ones” (Issar & Aneesh, 2022, p. 8). As a continent with a history of antiblackness and racism, we must be aware that antiblackness – “the micro and macro instances of prejudices, stereotyping, and discrimination in society directed toward persons of African descent – stems largely from how historical narratives present Black people” (King & Simmons, 2018, p. 109), not only because we have a past that facilitated racism, but because this racism is ongoing.

As an illustration of the power of the algorithm, we can look to recent news coming out of the state of Florida. Under the current governor, Ron DeSantis, the same governor who enacted the “Stop WOKE Act” and the “don’t say gay” restriction, the Black history curriculum has recently been changed to include standards that promote the racist idea that slavery in some way benefited Black people, and any discussion of the Black Lives Matter movement has been silenced in Florida schools (Burga, 2023). Upon learning of this unimaginable educational situation, I conducted a search on YouTube to try to learn more, and this search served to underscore Issar and Aneesh’s assertion that “one of the difficulties with algorithmic systems is that they can simultaneously be socially neutral and socially significant” (2022, p. 7). My search was socially neutral in that I was merely seeking more information about a current event in the state of Florida. It became socially significant in the days that followed. What transpired after the search was a semi-bombardment of what I would categorize as racial propaganda on my device, not restricted only to my YouTube application. One brief search to learn more about a shocking topic led the algorithm to surface not only content informing me about what is occurring in Florida politics, but also content supporting what is occurring in Florida – content that I do not want brought to my attention repeatedly. Over time, repeated exposure to problematic or blatantly false information leads the user to begin to think that many people believe this, and there is strength in numbers. If many people believe something to be true, it must then be true.

This is problematic in obvious ways, but there are also subtler ways in which the algorithm continues to exert its power. Imagine that a teacher, teaching a particular concept, conducts a search to support the lesson. If the teacher has searched for something questionable in its factuality – something containing racist tropes or other examples of symbolic violence – the content this teacher continues to be exposed to after the search will reinforce that biased and potentially harmful perspective. Further, as the teacher shares her screen with the class during instruction, there is a distinct likelihood that students will see the residue of this search in advertisements, in recommended YouTube videos, and in the results of the teacher’s Google searches. Beyond the potential for professional discomfort resulting from algorithmically suggested content lies the epistemic problem that this content is being recycled and presented as true, realistic, informative, valuable content. In this we see what Beer warned: “power is realised in the outcomes of algorithmic processes” (2017, p. 7). While this might present an opportunity to teach students about algorithms and the subversive power they possess, algorithmic awareness is only an emergent conversation for the majority of people, implying that the teacher may not possess the language or skillset to explain the unsolicited content displayed on the screen during instructional time.

This is not to suggest that there is no hope, or that our classrooms will be victims of algorithmic governance in the long term. “We are now seeing a growing interest in treating algorithms as object of study” (Beer, 2017, p. 3), and with this interest will come new information for understanding, and combatting, the reality of algorithmic presence. Hobbs argued that “We should know how algorithmic personalization affects preservice and practicing teachers as they search for and find online information resources for teaching and learning” (2020, p. 525). I would extend that statement to include all teachers, preservice and experienced, as algorithmic governance impacts everyone.

Conclusion

The power held by the opaque algorithms that control the flow, and the visibility, of digital information presents what Rittel and Webber (1973) would call a wicked problem. Wicked problems lack the clarifying traits of simpler problems, with the term wicked meaning malignant, vicious, tricky, and aggressive (p. 160). Existing as a secret phantom, the algorithm that shapes and changes our access to information is, indeed, a wicked problem. Hobbs stated that “given the many different ways that algorithmic personalization affects peoples’ lives online, it will be important to advance theoretical concepts and develop pedagogies that deepen our understanding of algorithmic personalization’s potential impact on learning” (2020, p. 525).

Further algorithmic challenges await in the near future as we move toward a future infused with ubiquitous AI. Algorithms have brought a new type of manipulation into the digitally connected world, with the potential to further increase the polarization already being experienced in our modern society. Artificial intelligence presents a new wicked problem for education as we consider its impact on assessment, plagiarism, contract cheating and myriad other relevant topics that will reveal themselves as this new technological revolution unfolds. Educational researchers will need to continue to interrogate and explore the powers behind the algorithms that impact all digital users worldwide to advance accurate, equal, ethically responsible dissemination of information.

 

References

Alkhatib, A., & Bernstein, M. (2019). Street–level algorithms: A theory at the gaps between policy and decisions. Conference on Human Factors in Computing Systems – Proceedings. https://doi.org/10.1145/3290605.3300760

Altay, S., Berriche, M., & Acerbi, A. (2023). Misinformation on Misinformation: Conceptual and Methodological Challenges. Social Media and Society, 9(1). https://doi.org/10.1177/20563051221150412

Aneesh, A.  (2006). Virtual migration: The programming of globalization. Duke University Press.

Baker, P. (2023, August 14). Trump indictment, Part IV: A spectacle that has become surreally routine. The New York Times. https://www.nytimes.com/2023/08/14/us/politics/trump-indictments-georgia-criminal-charges.html

Beer, D. (2017). The social power of algorithms. In Information Communication and Society (Vol. 20, Issue 1, pp. 1–13). Routledge. https://doi.org/10.1080/1369118X.2016.1216147

Blake-Turner, C. (2020). Fake news, relevant alternatives, and the degradation of our epistemic environment. Inquiry (Oslo), ahead-of-print(ahead-of-print), 1–21. https://doi.org/10.1080/0020174X.2020.1725623

Brannelly, K. (2016). Trump campaign pays millions to overseas big data firm. NBC News. https://www.nbcnews.com/storyline/2016-election-day/trump-campaign-pays-millions-overseas-big-data-firm-n677321

Bucher, T. (2012). Want to be on the top? Algorithmic power and the threat of invisibility on Facebook. New Media & Society, 14(7), 1164–1180. https://doi-org.ezproxy.lib.ucalgary.ca/10.1177/1461444812440159

Burga, S. (2023, July). Florida approves controversial guidelines for Black history curriculum. Here’s what to know. Time. https://time.com/6296413/florida-board-of-education-black-history/

Connolly, R. (2023). Datafication, Platformization, Algorithmic Governance, and Digital Sovereignty: Four Concepts You Should Teach. ACM Inroads, 14(1), 40–48. https://doi.org/10.1145/3583087

Conway, K. (2020). The art of communication in a polarized world. AU Press.

Dinesh, S., & Odabaş, M. (2023, July 26). 8 facts about Americans and Twitter as it rebrands to X. Pew Research Center. https://www.pewresearch.org/short-reads/2023/07/26/8-facts-about-americans-and-twitter-as-it-rebrands-to-x/

Farrow, R. (2023, August). Elon Musk’s shadow rule. The New Yorker. https://www.newyorker.com/magazine/2023/08/28/elon-musks-shadow-rule

Foucault, M. (1977). Discipline and punish: The birth of the prison. Allen Lane.

Gilbert, M. (2010). Theorizing digital and urban inequalities: Critical geographies of “race”, gender and technological capital. Information, Communication & Society, 13(7), 1000–1018. https://doi.org/10.1080/1369118X.2010.499954

Harris, K. R. (2022). Real Fakes: The Epistemology of Online Misinformation. Philosophy & Technology, 35(3), 83–83. https://doi.org/10.1007/s13347-022-00581-9

Hobbs, R. (2020). Propaganda in an Age of Algorithmic Personalization: Expanding Literacy Research and Practice. Reading Research Quarterly, 55(3), 521–533. https://doi.org/10.1002/rrq.301

Huq, A. Z. (2019). Racial equity in algorithmic criminal justice. Duke Law Journal, 68(6), 1043–1134.

Issar, S., & Aneesh, A. (2022). What is algorithmic governance? Sociology Compass, 16(1). https://doi.org/10.1111/soc4.12955

Joinson, A., Houghton, D., Vasalou, A., & Marder, B. (2011). Digital crowding: Privacy, self-disclosure, and technology. In S. Trepte & L. Reinecke (Eds.), Privacy online (pp. 33–45). Springer Berlin Heidelberg. https://doi.org/10.1007/978-3-642-21521-6

King, L. J., & Simmons, C. (2018). Narratives of Black history in textbooks: Canada and the United States. In S. A. Metzger & L. M. Harris (Eds.), The Wiley international handbook of history teaching and learning. Wiley-Blackwell.

Neely, B., & Samura, M. (2011). Social geographies of race: connecting race and space. Ethnic and Racial Studies, 34(11), 1933–1952. https://doi.org/10.1080/01419870.2011.559262

Norberg, P., Horne, D. R., & Horne, D. A. (2007). Privacy Paradox: Personal Information Disclosure Intentions versus Behaviors. The Journal of Consumer Affairs, 41(1), 100–126. https://doi.org/10.1111/j.1745-6606.2006.00070.x

Obar, J. A., & Oeldorf-Hirsch, A. (2020). The biggest lie on the Internet: ignoring the privacy policies and terms of service policies of social networking services. Information Communication and Society, 23(1), 128–147. https://doi.org/10.1080/1369118X.2018.1486870

Pasquale, F. (2015). The black box society : the secret algorithms that control money and information. Harvard University Press.

Pham, J., & Philip, T. (2021). Shifting education reform towards anti-racist and intersectional visions of justice: A study of pedagogies of organizing by a teacher of Color. Journal of the Learning Sciences, 30(1), 27–51. https://doi.org/10.1080/10508406.2020.1768098

Philip, T., & Sengupta, P. (2021). Theories of learning as theories of society: A contrapuntal approach to expanding disciplinary authenticity in computing. Journal of the Learning Sciences, 30(2), 330–349. https://doi.org/10.1080/10508406.2020.1828089

Rainie, L. & Anderson, J. (2017, May). The future of jobs and jobs training. Pew Research. https://www.pewresearch.org/internet/2017/05/03/the-future-of-jobs-and-jobs-training/

Rini, R. (2017). Fake news and partisan epistemology. Kennedy Institute of Ethics Journal, 27(2), E–43–E–64. https://doi.org/10.1353/ken.2017.0025

Rittel, H. W. J., & Webber, M. M. (1973). Dilemmas in a general theory of planning. Policy Sciences, 4(2), 155–169.

Shalf, J., & Leland, R. (2015). Computing beyond Moore’s law. Computer, 48(12), 14–23. https://doi.org/10.1109/MC.2015.374

Translated by Content Engine, L. L. C. (2023, Feb 08). Is it the end of Moore’s Law? Artificial intelligence like ChatGPT challenges the limits of physics. CE Noticias Financieras https://ezproxy.lib.ucalgary.ca/login?qurl=https%3A%2F%2Fwww.proquest.com%2Fwire-feeds%2Fis-end-moores-law-artificial-intelligence-like%2Fdocview%2F2774910208%2Fse-2%3Faccountid%3D9838

Warzel, C. (2023, July). Elon Musk Really Broke Twitter This Time. The Atlantic. https://www.theatlantic.com/technology/archive/2023/07/twitter-outage-elon-musk-user-restrictions/674609/ 

 

I got my Ethics Approval!

I got the green light today! 

The ethics process is not an interesting one to blog about, but it is a crucial step in the research process. The questions in the ethics application delve deeply into the rationale for conducting the research and, more importantly, the impact that the research may have upon participants. The application was completed by me, with my supervisor as the Principal Investigator. She assisted me in ensuring that the application was thoroughly completed.

The application is then reviewed through the Institutional Research Information Services Solution (IRISS), and the reviewers respond with items that need clarification and/or attention. After a couple of back-and-forth online conversations regarding the needed revisions, my application was approved.

I then had to file the approved paperwork with the school district I will be working with for my research as they require the paperwork 30 days in advance of the commencement of my research. I have submitted that already, as I am hoping to deploy my survey on August 20, as there is a looming threat of a teacher strike occurring early this fall. If I am going to have to be on strike, I’d like to be conducting the data analysis while that happens!

I am now a Doctoral Candidate!

I passed it!!

I passed my candidacy exam this morning! The above images reveal my nervousness in the moments leading up to the Zoom exam, and in the moments at the end. Let me explain.

The photo of the papers shows my specific research questions as they are worded in my proposal, and the propositions that I have put forth as part of my case study methodology. I anticipated that I might freeze and then panic trying to recall exactly how I worded them in the final proposal, and words matter. The last thing I wanted to do was misquote myself with respect to where the final wording of the questions landed and end up babbling!!

The photo on the right is of the esteemed faculty who served as my examination committee. I forgot to ask permission to post a photo to blog about my experience, so I have blurred all individuals as they were not offered an opportunity to decline.

What is a Doctoral Candidacy Exam like?

I can only speak to my personal experience, but if you are curious, this is how it played out:

In advance of the exam, I met with my Candidacy Committee: a group comprising my incredible supervisor and two other faculty members who are experts in the field where my specific research has landed. We selected two other faculty members as examiners (both were from UCalgary as well; when I defend, there will need to be a member from another institution, but for candidacy the examiners can all be from UCalgary), and my proposal was provided to them several weeks prior to the exam.

Another professor participates in the examination as the “neutral chair”; their job is to ensure that times are adhered to and that protocols are followed. As I understand it, this allows the other professors to focus on the examination while someone else watches the clock.

To start the exam, I was given fifteen minutes to give a presentation to the group about my research and my proposal. Upon completion of my presentation, each examiner, beginning with the professor “farthest from my research,” asked me a question, and I had ten minutes in which to respond. I was allowed to take my time considering my responses, and I could consult my paperwork, notes, etc. if I wished. But ten minutes is actually a fairly truncated period of time in which to respond, so it was important to be well-versed and confident in my research intentions. Then the second examiner asked a question, and again I had ten minutes to respond. The questions then moved to the members of my Candidacy Committee; each had the same opportunity to pose questions about my research, and again I had ten minutes to respond to each. The last to question me was my supervisor.

We then took a 5 minute break.

And then we repeated the above process.

At the end of the second round of questioning, I logged out of Zoom entirely to allow the examiners to discuss the status of my candidacy. 

While they were only discussing for a matter of minutes, not hours, it felt much longer than it was.

But the decision was unanimous: they declared that I had passed the exam. I am now a doctoral candidate, and I can proceed with completing my ethics application to the university to earn the green light to conduct my research!

Take the Challenge! Make this the Best Year Ever!

Download our free planner here!!

A great school year is built on great relationships… for both teachers and students. The best learning occurs in classrooms where relationships are prioritized.

Our free planner provides you with an EASY strategy to take control of those relationships in a deliberate, equitable, targeted manner in which all student strengths will be celebrated.

Developed from the research literature on the Teacher-Student relationship, this planner lays out a strategic approach for the coming school year to easily build great relationships with every student, and their families. 

Citations for the references contained in the planner are listed at the bottom of this page.

References

Ainsworth, M. D. S., Blehar, M. C., Waters, E., & Wall, S. (2015). Patterns of attachment: A psychological study of the strange situation. Routledge. (Original work published in 1979).

Ang, R. (2005). Development and Validation of the Teacher-Student Relationship Inventory Using Exploratory and Confirmatory Factor Analysis. The Journal of Experimental Education, 74(1), 55–74. https://doi.org/10.3200/JEXE.74.1.55-74

Ang, R. P., Ong, S. L., & Li, X. (2020). Student Version of the Teacher–Student Relationship Inventory (S-TSRI): Development, Validation and Invariance. Frontiers in Psychology, 11, 1724. https://doi.org/10.3389/fpsyg.2020.01724

Aultman, L. P., Williams-Johnson, M. R., & Schutz, P. A. (2009). Boundary dilemmas in teacher–student relationships: Struggling with “the line.” Teaching and Teacher Education, 25(5), 636–646. https://doi.org/10.1016/j.tate.2008.10.002

Birch, S. H., & Ladd, G. W. (1996). Interpersonal relationships in the school environment and children’s early school adjustment: The role of teachers and peers. In J. Juvonen & K. Wentzel (Eds.), Social motivation: Understanding children’s school adjustment. New York: Cambridge University Press.

Corbin, C. M., Alamos, P., Lowenstein, A. E., Downer, J. T., & Brown, J. L. (2019). The role of teacher-student relationships in predicting teachers’ personal accomplishment and emotional exhaustion. Journal of School Psychology, 77, 1–12. https://doi.org/10.1016/j.jsp.2019.10.001

Hamre, B. K., & Pianta, R. C. (2001). Early teacher-child relationships and the trajectory of children’s school outcomes through eighth grade. Child Development, 72(2), 625–638. https://doi.org/10.1111/1467-8624.00301

Hattie, J., & Yates, G. (2013). Visible learning and the science of how we learn. Routledge. https://doi-org.ezproxy.lib.ucalgary.ca/10.4324/9781315885025

Peter, F., & Dalbert, C. (2010). Do my teachers treat me justly? Implications of students’ justice experience for class climate experience. Contemporary Educational Psychology, 35(4), 297–305. https://doi.org/10.1016/j.cedpsych.2010.06.001

Quin, D. (2017). Longitudinal and contextual associations between teacher–student relationships and student engagement: A systematic review. Review of Educational Research, 87(2), 345–387. https://doi.org/10.3102/0034654316669434

Stuhlman, M. W., & Pianta, R. C. (2002). Teachers’ narratives about their relationships with children: Associations with behavior in classrooms. School Psychology Review, 31(2), 148–163. https://doi.org/10.1080/02796015.2002.12086148

Vygotsky, L. (1978). Mind in society: The development of higher psychological processes. M. Cole, V. John-Steiner, S. Scribner, & E. Souberman (Eds.). Cambridge, MA: Harvard University Press.

Wentzel, K. R. (1997). Student motivation in middle school: The role of perceived pedagogical caring. Journal of Educational Psychology, 89(3), 411–419.

Masterclass in Graduate Studies Organization

Completing a graduate degree while working full-time, having a family, and wanting to still have some personal time requires planning and deliberate strategies. As a specialist in education and educational technology, I have developed a simple, but layered plan through which to complete my doctoral degree with minimal stress. 

In the video below, I outline for you how to set yourself up to enjoy your degree, experience success, and feel in control of the process every step of the way.

Through the use of an iPad equipped with the app Goodnotes, and a computer with Zotero and Google Slides, I have limited my paper consumption significantly and streamlined my research process.

What is the “Turing Test?”

In his 1950 paper “Computing Machinery and Intelligence,” Alan Turing – the mathematician and computer scientist who had played a crucial role in cracking the Enigma code in the Second World War – engaged with questions about machines, computation, and future technologies.

Originally called “The Imitation Game” (a movie of that name, about Turing’s life, was released in 2014), the Turing Test as we now know it was proposed to explore the question “Can machines think like humans?” To this end, a human judge is situated apart from both another human and a machine. Both the machine and the human respond to the judge’s queries. When the judge cannot discern whether a response came from the human or the machine, the test is said to have been passed.
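
For readers who like to see ideas as code, here is a toy simulation of that setup in Python. It is purely illustrative: the respondents and the judge are stand-in functions I have invented for this sketch, and the only point is the structure of the test – a judge who cannot beat random chance means the machine has passed.

```python
import random

QUESTIONS = ["What is 2 + 2?", "Describe the smell of rain.", "Tell me a joke."]

def human(question):
    # Stand-in for a real person typing an answer.
    return f"A person's answer to: {question}"

def machine(question):
    # Stand-in for a machine generating an answer.
    return f"A generated answer to: {question}"

def judge(answer_a, answer_b):
    # This toy judge cannot tell the two answers apart, so it guesses.
    return random.choice(["A", "B"])

def run_trials(n=1000):
    correct = 0
    for _ in range(n):
        # Randomize which hidden seat ("A" or "B") the machine occupies.
        if random.random() < 0.5:
            respondents, machine_seat = {"A": machine, "B": human}, "A"
        else:
            respondents, machine_seat = {"A": human, "B": machine}, "B"
        q = random.choice(QUESTIONS)
        guess = judge(respondents["A"](q), respondents["B"](q))
        correct += (guess == machine_seat)
    return correct / n

if __name__ == "__main__":
    # Accuracy near 50% means the judge cannot identify the machine.
    print(f"Judge identified the machine in {run_trials():.1%} of trials")
```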

When ChatGPT was released on November 30, 2022, many felt that at that moment the Turing Test had officially been passed, and this change has already impacted many aspects of global society. Time can be saved through the use of ChatGPT, written content can be improved, tedious writing tasks can be assisted, and human written output can be bolstered. Of course, there are challenges as well; teachers in particular face some challenge at this time in discerning whether a student has authentically written the work they are submitting for grading.

These topics are all covered in other blog posts, and so today’s topic answers the question “What is the Turing Test?”

ChatGPT #QuickWins for Teachers

Let’s talk about some ideas for teachers to start using ChatGPT to save time. Teachers are busy people, and sometimes it feels like there’s always one more thing being added to the “to-do” list that teachers are expected to undertake. Wouldn’t it be nice to have some help? Wouldn’t it be amazing to have an assistant who could come up with new ideas and ways to refresh your projects, assessments, newsletters, report card comments, and other clerical duties? Well, please let me introduce you to ChatGPT. Your new idea-generating, text-writing virtual assistant. Here are some ideas to test out in ChatGPT. Enter one of these prompts, and watch how FAST it comes up with ideas for you. This is a game-changer.
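
For teachers who are comfortable with a little code, the same quick wins can be scripted rather than typed into the ChatGPT website. The sketch below is one possible approach, and it rests on assumptions beyond this post: it requires the openai Python package (v1.x) and an API key in the OPENAI_API_KEY environment variable, and the model name and prompt wording are simply examples, not recommendations.

```python
# A minimal sketch: generate newsletter blurb ideas with the OpenAI API.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

prompt = (
    "You are an assistant for a Grade 7 teacher. Draft three ideas for a "
    "classroom newsletter blurb about our upcoming science fair, in a "
    "warm, parent-friendly tone, each under 80 words."
)

response = client.chat.completions.create(
    model="gpt-4o-mini",  # any chat-capable model would work here
    messages=[{"role": "user", "content": prompt}],
)

print(response.choices[0].message.content)
```

Swap in your own prompt – report card comment starters, rubric drafts, project refreshers – and the structure stays the same.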

Eduaide – AI Tool Review

In the ever-evolving landscape of education, the integration of artificial intelligence (AI) has emerged as a transformative force, reshaping the way we approach teaching and learning. AI, with its ability to process vast amounts of data, adapt to individual learning styles, and facilitate personalized experiences, has transcended the conventional boundaries of education. As we navigate the 21st century, AI is not merely a technological novelty; it is a dynamic catalyst propelling education into new frontiers. From intelligent tutoring systems that offer tailored support to students, to chatbots fostering interactive and responsive learning environments, AI is revolutionizing the very essence of education. It is not just a tool; it’s a pedagogical ally, amplifying the capacities of educators and unleashing unparalleled possibilities in the realm of teaching. This dynamic fusion of artificial intelligence and education promises not only efficiency but also a redefinition of what it means to engage, inspire, and empower the learners of tomorrow.

Eduaide is one of the frontrunners in AI technology for teachers. With just a few clicks, the AI will assist teachers in creating strong, editable content that aligns with the curricular outcomes the teacher inputs!

The image below is a worksheet (or quiz, or test) that Eduaide generated in a matter of seconds. The only instruction I provided in the “topic” field was “Algebraic Expressions”. Eduaide did not automatically provide an answer key, but when I clicked on the rocket in the top right corner (above the math questions), generating an answer key was an option I could choose.

The image below shows an escape room about water conservation that Eduaide generated in a matter of seconds. The results include the materials needed for this escape room, as well as instructions for the setup. What it did not include were the questions, riddles, or puzzles that the students must solve in order to complete the escape room.

Eduaide saves the content you create, but the “edit” button is not intuitive. The “saved content” screen lists the resources you have created in Eduaide, and the “Preview” button is really obvious there. If you click to the right of the Preview button (on the kebab menu… the three dots), the first choice says “Load in Workspace”; that is where you go to edit your resource.

Class Companion – AI Tool Review

Today I took some time to have a peek at a formative assessment tool for students’ written work. I used a portion of the literature review from my own dissertation to ascertain what the tool is capable of, and I was actually fairly impressed.

Teacher Dashboard

The teacher dashboard is easy to navigate. Like most learning management systems, you create classes in the dashboard and push assignments out to students. The AI in the dashboard can assist you with creating a writing assignment, and once you have entered your assignment, you determine which classes to assign it to.

Student View

The student side was also very familiar in its appearance. Students see the assignments they are expected to complete, and the layout is logical. Students also have a button to dispute their grade, which sends a message to the teacher. Student self-assessment is always a goal in education, and used intentionally, the dispute option can prompt some self-reflection from students about their work.

It is in the writing of the student that the magic happens.

Assessing the Tool

For my assessment of this tool, I created a class and assigned an essay on the history of AI. I then added a fake student to my class and assigned the essay to this fake student. Then I grabbed a Chromebook and, through the email from Class Companion, was able to join this fake class.

I opened the assignment and was greeted with a space in which to complete my essay. I opened my literature review chapter and copy/pasted my first three paragraphs into the tool. It took some time to assess my work (it wrote some entertaining phrases on the screen while I waited), and I must mention that because I did not provide the entire essay, some parts of the formative assessment are weak; the tool did not have all of my writing to assess (notably, no conclusion!).

Overall

Additional testing revealed that the tool adjusts its scoring if the teacher changes the rubric, and the overall formative assessment was accurate, as tested by teachers.

There is an interesting reflective opportunity for teachers here: an opportunity to compare their own grading against that of the AI tool. That’s not to say that the tool is correct and the teacher is wrong; not by any stretch. It’s simply an opportunity for teachers to consider their rubrics and their own tendencies when grading.

As the teacher, I can override grades given by the tool, allowing me to have the final say as to a student’s performance on the written task.

Overall, I was impressed with this tool.

Converging Technologies that Shaped the AI Landscape

When OpenAI released ChatGPT on November 30, 2022, it felt as though AI had suddenly “arrived”. Despite that feeling of suddenness, there were, unsurprisingly, decades of research and technological development that led to this disruptive piece of technology.

The image represents 10 “high tech” concepts, all of which have been mentioned in the literature and empirical articles I’ve been reading. The dotted lines illustrate which concepts were connected to other concepts within the literature. All of these technologies have played a role in bringing us to where we currently are with artificial intelligence.

10 Artificial Intelligence Uses in K-12 Education

In the dynamic landscape of K-12 education, the integration of Artificial Intelligence (AI) has revolutionized the way students learn and educators teach. From personalized learning experiences to intelligent tutoring systems, AI applications in education have opened up a realm of possibilities to enhance student engagement, comprehension, and overall academic performance. This list covers some of the growing areas where artificial intelligence is appearing in education. Though not yet necessarily commonplace, the literature reveals that many of these uses are coming soon to a school near you!

1. Personalized Learning: AI-powered educational software can adapt to students’ individual learning styles and paces, providing personalized learning experiences tailored to their specific needs (Chan & Hu, 2023; Crompton & Burke, 2022; Fuchs, 2023; García-Martínez et al., 2023; Gupta & Chen, 2022; Hwang & Tu, 2021).

2. Intelligent Tutoring Systems: AI-driven tutoring systems can provide students with real-time feedback, additional practice opportunities, and customized learning paths to enhance their understanding of various subjects (Crompton et al., 2022; Crompton & Burke, 2023; Hwang & Tu, 2021; Zawacki-Richter et al., 2019).

3. Adaptive Assessments: AI-based assessment tools can analyze students’ performance data and provide educators with insights into their strengths and weaknesses, facilitating targeted interventions and support strategies.

4. Virtual Reality (VR) and Augmented Reality (AR) Learning: AI can be used to create immersive and interactive virtual learning environments, allowing students to explore complex concepts through realistic simulations and visualizations.

5. Language Learning Support: AI-powered language learning platforms can assist students in developing their language skills by providing interactive lessons, pronunciation guidance, and language practice exercises.

6. Automated Grading Systems: AI-based grading systems can automate the process of grading assignments and assessments, enabling educators to save time and focus on providing more targeted feedback and support to students.

7. Educational Content Creation: AI tools can assist educators in creating engaging and interactive educational content, including lesson plans, quizzes, and educational games, to enhance students’ learning experiences.

8. Data-Driven Decision Making: AI analytics tools can analyze large datasets to identify trends and patterns in student performance, enabling educators to make data-driven decisions to improve teaching methodologies and student outcomes.

9. Intelligent Content Filtering: AI algorithms can help filter and curate educational content, ensuring that students have access to appropriate and relevant learning materials while maintaining a safe and secure online learning environment.

10. Interactive Chatbots for Learning Support: AI-powered chatbots can provide students with instant access to information, answer their questions, and offer learning guidance, fostering a supportive and engaging learning environment both inside and outside the classroom (Chen et al., 2023; Fuchs, 2022; Fuchs, 2023; Gupta & Chen, 2022; Liang et al., 2023; Sweeney, 2023; Tlili et al., 2023; Yu, 2023).

References

Chan, C. K. Y., & Hu, W. (2023). Students’ voices on generative AI: Perceptions, benefits, and challenges in higher education. International Journal of Educational Technology in Higher Education, 20(1), 43. https://doi.org/10.1186/s41239-023-00411-8

Chen, Y., Jensen, S., Albert, L. J., Gupta, S., & Lee, T. (2023). Artificial intelligence (AI) student assistants in the classroom: Designing chatbots to support student success. Information Systems Frontiers, 25(1), 161–182. https://doi.org/10.1007/s10796-022-10291-4

Crompton, H., Jones, M. V., & Burke, D. (2022). Affordances and challenges of artificial intelligence in K-12 education: A systematic review. Journal of Research on Technology in Education, 1–21. https://doi.org/10.1080/15391523.2022.2121344

Crompton, H., & Burke, D. (2022). Artificial intelligence in K-12 education. SN Social Sciences, 2(7), 113. https://doi.org/10.1007/s43545-022-00425-5

Crompton, H., & Burke, D. (2023). Artificial intelligence in higher education: The state of the field. International Journal of Educational Technology in Higher Education, 20(1), 22. https://doi.org/10.1186/s41239-023-00392-8

Fuchs, K. (2022). The importance of competency development in higher education: Letting go of rote learning. Frontiers in Education, 7, 1004876. https://doi.org/10.3389/feduc.2022.1004876

García-Martínez, I., Fernández-Batanero, J. M., Fernández-Cerero, J., & León, S. P. (2023). Analysing the impact of artificial intelligence and computational sciences on student performance: Systematic review and meta-analysis. Journal of New Approaches in Educational Research, 12(1), 171. https://doi.org/10.7821/naer.2023.1.1240

Gupta, S., & Chen, Y. (2022). Supporting inclusive learning using chatbots? A chatbot-led interview study.

Hwang, G.-J., & Tu, Y.-F. (2021). Roles and Research Trends of Artificial Intelligence in Mathematics Education: A Bibliometric Mapping Analysis and Systematic Review. Mathematics, 9(6), 584. https://doi.org/10.3390/math9060584

Liang, J.-C., Hwang, G.-J., Chen, M.-R. A., & Darmawansah, D. (2023). Roles and research foci of artificial intelligence in language education: An integrated bibliographic analysis and systematic review approach. Interactive Learning Environments, 31(7), 4270–4296. https://doi.org/10.1080/10494820.2021.1958348

Sweeney, S. (2023). Who wrote this? Essay mills and assessment – Considerations regarding contract cheating and AI in higher education. The International Journal of Management Education, 21(2), 100818. https://doi.org/10.1016/j.ijme.2023.100818

Tlili, A., Shehata, B., Adarkwah, M. A., Bozkurt, A., Hickey, D. T., Huang, R., & Agyemang, B. (2023). What if the devil is my guardian angel: ChatGPT as a case study of using chatbots in education. Smart Learning Environments, 10(1), 1–24. https://doi.org/10.1186/s40561-023-00237-x

Yu, H. (2023). Reflection on whether Chat GPT should be banned by academia from the perspective of education and teaching. Frontiers in Psychology, 14, 1181712. https://doi.org/10.3389/fpsyg.2023.1181712

Zawacki-Richter, O., Marín, V. I., Bond, M., & Gouverneur, F. (2019). Systematic review of research on artificial intelligence applications in higher education – where are the educators? International Journal of Educational Technology in Higher Education, 16(1), 39. https://doi.org/10.1186/s41239-019-0171-0

Key Concepts and Terminology in AI

Technojargon can be overwhelming, not just for teachers, but for everyone. The world of artificial intelligence is no exception; it is a field filled with technical terms and definitions. Not all of the lingo is important for educators, but having a cursory understanding of the basics is probably worth the effort. Here we take a look at some of the key terms and definitions pertinent to AI, in order to achieve a slightly deeper understanding of where we are in 2023.

AIEd: short for Artificial Intelligence in Education, refers to the integration of artificial intelligence technologies into educational practices. It involves the use of AI-driven tools, algorithms, and systems to enhance and personalize the learning experience for students, streamline administrative tasks for educators, and provide data-driven insights for educational institutions. The application of AI in education (AIEd) has been the subject of research for about 30 years (Zawacki-Richter et al., 2019, p. 2).

Aigiarism: related to plagiarism, the practice of using someone else’s work or ideas without proper attribution or permission, aigiarism refers to using content generated entirely by artificial intelligence without acknowledgement or attribution. In reading the literature, I have seen authors list ChatGPT as a co-author, and I have read articles where the author opted not to formally name ChatGPT as a co-author, with an explanation as to this choice.

Algorithm: An algorithm is a set of step-by-step instructions or rules designed to perform a specific task or solve a particular problem. In the context of computer science and AI, algorithms serve as the foundation for various operations, including data processing, machine learning, and decision-making processes.
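
For instance, here is a minimal sketch in Python (the grades and the function are invented for illustration, not tied to any particular tool) showing what a small, everyday algorithm looks like when written out as step-by-step instructions: computing a class average.

```python
# A small everyday algorithm, written out as explicit steps.
# The grades are invented sample data for illustration.

def class_average(grades):
    """Step through the grades, total them, and divide by the count."""
    total = 0
    for grade in grades:        # Step 1: add each grade to the running total
        total += grade
    return total / len(grades)  # Step 2: divide the sum by the number of grades

sample_grades = [72, 85, 91, 64, 78]
print(class_average(sample_grades))  # prints 78.0
```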

Big Data: Big Data refers to large and complex data sets that are challenging to process using traditional data processing applications. It encompasses vast volumes of structured and unstructured data that require advanced analytics and processing techniques to extract valuable insights, patterns, and trends.

ChatBot: an AI-powered computer program designed to simulate human conversation and interact with users via text or speech. The term “chatbot” stems from “chatter bot,” coined by Michael Loren Mauldin for programs capable of text-based conversations with users (Chen et al., 2023, p. 162). Chatbots utilize natural language processing and machine learning algorithms to understand user queries, provide relevant information, and engage in meaningful conversations.
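
To illustrate the “chatter bot” idea in its simplest possible form, here is a toy rule-based sketch in Python; the keywords and replies are invented, and modern chatbots like ChatGPT rely on natural language processing and machine learning rather than a hand-written lookup table like this one.

```python
# A toy rule-based chatbot (illustration only). Modern chatbots such as
# ChatGPT use NLP and machine learning; this hand-written keyword table
# only demonstrates the basic idea of matching a message to a response.

RULES = {
    "hello": "Hi there! How can I help you today?",
    "homework": "This week's homework is posted in the class portal.",
    "bye": "Goodbye! Good luck with your studies.",
}

def reply(message):
    # Look for the first known keyword in the message; otherwise fall back.
    for keyword, response in RULES.items():
        if keyword in message.lower():
            return response
    return "I'm not sure about that one. Could you rephrase?"

print(reply("Hello!"))                     # matches "hello"
print(reply("When is the homework due?"))  # matches "homework"
print(reply("What's for lunch?"))          # falls back to the default
```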

ChatGPT: a Natural Language Processing (NLP) model developed by OpenAI that uses a large dataset to generate text responses to student queries, feedback, and prompts (Fuchs, 2023, p. 1). Tlili et al. (2023) noted that ChatGPT is a conversational artificial intelligence interface which interacts in a realistic way and even answers “follow up questions, admits its mistakes, challenges incorrect premises, and rejects inappropriate requests” (OpenAI, 2023).

Deep Learning: a subset of machine learning that utilizes artificial neural networks to process and analyze complex data. It involves the use of multiple layers of algorithms to extract high-level features from raw data, enabling machines to perform tasks such as image recognition, natural language understanding, and decision-making. The ability of computers to simulate what the brain does is called deep learning (Maboloc, 2023, p. 1).

Generative Artificial Intelligence: AI systems capable of creating original content. GenAI models use advanced algorithms to learn patterns and generate new content such as text, images, sounds, videos and code (Chan & Hu, 2023, p. 1). These systems, often based on deep learning models, generate new data from the patterns and examples in existing datasets, and their output closely resembles human-generated content.

GPT: GPT stands for Generative Pre-trained Transformer, a type of deep learning model known for its ability to generate human-like text based on given prompts. GPT is a language model developed by OpenAI that is capable of producing response text that is nearly indistinguishable from natural human language (Lund & Wang, 2023, p. 26). GPT models are based on transformer architectures and have been widely used for various natural language processing tasks, including text generation, translation, and summarization.

Large Language Models: advanced AI models designed to process and understand human language on a large scale. A language model is a type of AI model trained to generate text that is similar to human language (Lund & Wang, 2023, p. 26). These models utilize complex algorithms and extensive datasets to perform tasks such as text generation, language translation, and sentiment analysis.

Machine Learning: a branch of AI that focuses on developing algorithms and systems capable of learning from data and making predictions or decisions based on that data. Popenici and Kerr (2017) define machine learning as “a subfield of artificial intelligence that includes software able to recognise patterns, make predictions, and apply newly discovered patterns to situations that were not included or covered by their initial design”. It involves the use of statistical techniques and iterative learning processes to enable machines to improve their performance over time.
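
To make “learning from data” concrete, here is a minimal sketch in Python (the study-hours data is invented) that fits a straight line to a handful of points using the classic least-squares formulas; real machine learning systems apply the same principle, adjusting parameters to fit observed data, at vastly larger scale.

```python
# "Learning from data" in miniature: fit the line y = m*x + b to a few
# invented data points (hours studied vs. test score) with the classic
# least-squares formulas. Real ML systems apply the same principle --
# adjust parameters until they fit observed data -- at massive scale.

xs = [1, 2, 3, 4, 5]        # hours studied (invented sample data)
ys = [52, 60, 71, 78, 88]   # test scores (invented sample data)

n = len(xs)
mean_x = sum(xs) / n
mean_y = sum(ys) / n

# Slope and intercept "learned" from the data.
m = (sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
     / sum((x - mean_x) ** 2 for x in xs))
b = mean_y - m * mean_x

print(f"Learned model: score = {m:.1f} * hours + {b:.1f}")  # 9.0 and 42.8
print(f"Predicted score after 6 hours: {m * 6 + b:.1f}")    # 96.8
```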

Multimodal Models: AI systems that can process and interpret multiple types of data, such as text, images, and audio, simultaneously. These models integrate information from various modalities to gain a comprehensive understanding of the data and enable more sophisticated analysis and decision-making. Multimodal models such as GPT-4 may produce voice and video explanations and tag images (Rahaman et al., 2023, p. 2).

Natural Language Processing Models: NLP Models are AI systems specifically designed to understand, interpret, and generate human language. These models use algorithms and linguistic rules to process and analyze text or speech data, enabling tasks such as language translation, sentiment analysis, and text summarization. Natural Language Processing (NLP) models have been in development since the 1950s (Jones, 1994), but it was not until the past decade that they gained significant attention and advancement, particularly with the development of deep learning techniques and large datasets (Fuchs, 2023, p. 1).

Neural Systems: Neural Systems refer to computational models inspired by the structure and functioning of the human brain’s neural networks. Neural systems mimic the human brain (Maboloc, 2023, p. 1). In the context of AI, neural systems are utilized for tasks such as pattern recognition, decision-making, and learning from data, often implemented through artificial neural networks.

Training Data: the dataset used to train machine learning models and AI systems. It consists of labeled or unlabeled examples that enable algorithms to learn patterns, make predictions, and improve their performance on specific tasks. If the training data is not adequately diverse or is of low quality, the system might learn incorrect or incomplete patterns, leading to inaccurate responses (Fuchs, 2023, p. 2).
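
As a small illustration, the sketch below shows what labeled training data might look like for a toy sentiment-classification task; the examples are invented, and a real dataset would contain thousands or millions of such pairs.

```python
# An invented, miniature example of labeled training data for a toy
# sentiment task. Each (text, label) pair is one example the model
# learns from; low-quality or unrepresentative examples lead to
# unreliable models.

training_data = [
    ("I loved this lesson, it was so clear!",     "positive"),
    ("The worksheet was confusing and too long.", "negative"),
    ("Great explanation of fractions today.",     "positive"),
    ("I didn't understand the homework at all.",  "negative"),
]

for text, label in training_data:
    print(f"{label:>8}: {text}")
```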

Turing Test: a measure of a machine’s ability to exhibit intelligent behavior that is indistinguishable from that of a human. Proposed by Alan Turing in 1950, who described the intelligent reasoning and thinking that could go into intelligent machines (Crompton & Burke, 2023, p. 2), the test involves a human evaluator engaging in a natural language conversation with both a machine and another human without knowing which is which. If the evaluator cannot reliably distinguish between the machine and the human, the machine is considered to have passed the Turing Test. The test was proposed as a protocol for determining whether a machine can exhibit intelligent behavior equivalent to, or indistinguishable from, that of a human (Tlili et al., 2023, p. 2).

References

Chan, C. K. Y., & Hu, W. (2023). Students’ voices on generative AI: Perceptions, benefits, and challenges in higher education. International Journal of Educational Technology in Higher Education, 20(1), 43. https://doi.org/10.1186/s41239-023-00411-8

Chen, Y., Jensen, S., Albert, L. J., Gupta, S., & Lee, T. (2023). Artificial intelligence (AI) student assistants in the classroom: Designing chatbots to support student success. Information Systems Frontiers, 25(1), 161–182. https://doi.org/10.1007/s10796-022-10291-4

Crompton, H., & Burke, D. (2023). Artificial intelligence in higher education: The state of the field. International Journal of Educational Technology in Higher Education, 20(1), 22. https://doi.org/10.1186/s41239-023-00392-8

Fuchs, K. (2022). The importance of competency development in higher education: Letting go of rote learning. Frontiers in Education, 7, 1004876. https://doi.org/10.3389/feduc.2022.1004876

Lund, B. D., & Wang, T. (2023). Chatting about ChatGPT: How may AI and GPT impact academia and libraries? Library Hi Tech News, 40(3), 26–29. https://doi.org/10.1108/LHTN-01-2023-0009

Maboloc, C. R. (2023). Chat GPT: The need for an ethical framework to regulate its use in education. Journal of Public Health, fdad125. https://doi.org/10.1093/pubmed/fdad125

OpenAI. (2023). ChatGPT: Optimizing language models for dialogue. https://openai.com/blog/chatgpt.

Popenici, S. A. D., & Kerr, S. (2017). Exploring the impact of artificial intelligence on teaching and learning in higher education. Research and Practice in Technology Enhanced Learning, 12(1), 22. https://doi.org/10.1186/s41039-017-0062-8

Rahaman, Md. S., Ahsan, M. M. T., Anjum, N., Terano, H. J. R., & Rahman, Md. M. (2023). From ChatGPT-3 to GPT-4: A Significant Advancement in AI-Driven NLP Tools. Journal of Engineering and Emerging Technologies, 1(1), 50–60. https://doi.org/10.52631/jeet.v1i1.188

Tlili, A., Shehata, B., Adarkwah, M. A., Bozkurt, A., Hickey, D. T., Huang, R., & Agyemang, B. (2023). What if the devil is my guardian angel: ChatGPT as a case study of using chatbots in education. Smart Learning Environments, 10(1), 1–24. https://doi.org/10.1186/s40561-023-00237-x

Zawacki-Richter, O., Marín, V. I., Bond, M., & Gouverneur, F. (2019). Systematic review of research on artificial intelligence applications in higher education – where are the educators? International Journal of Educational Technology in Higher Education, 16(1), 39. https://doi.org/10.1186/s41239-019-0171-0

A Brief History of AI

The roots of AI can be traced back to the mid-20th century when researchers began exploring the possibility of creating machines that could simulate human intelligence. From the early days of simple problem-solving algorithms to the development of complex neural networks and deep learning models, AI has made significant strides in its evolution. It has transitioned from rule-based systems to data-driven approaches, unlocking capabilities such as natural language processing, computer vision, and autonomous decision-making. Over the years, AI has transformed from a theoretical concept to a practical reality, permeating various aspects of our daily lives and demonstrating its potential to reshape the future of education and beyond.

The birth of AI goes back to the 1950s, when John McCarthy organised a two-month workshop at Dartmouth College in the USA. In the workshop proposal, McCarthy used the term artificial intelligence for the first time in 1956 (Russell & Norvig, 2010, p. 17; Zawacki-Richter et al., 2019, p. 3), as he followed up on the work of Turing (Crompton & Burke, 2023, p. 2). Specifically, McCarthy’s use of the term “artificial intelligence” (AI) was intended to refer to machines and processes that imitate human cognition and make decisions like humans (Tlili et al., 2023, p. 1). There have certainly been lulls in the forward progress of AI since the coining of the term, and recent years have seen a significant change in artificial intelligence.

Currently, AI capability is developing rapidly (Sweeney, 2023, p. 2). At the end of 2022, ChatGPT, developed by OpenAI, was hailed as the most advanced intelligent machine yet and the closest to passing the Turing Test, ushering in a new, vibrant era of artificial intelligence (Yu, 2023, p. 2).

References

Crompton, H., & Burke, D. (2023). Artificial intelligence in higher education: The state of the field. International Journal of Educational Technology in Higher Education, 20(1), 22. https://doi.org/10.1186/s41239-023-00392-8

Russell, S. J., & Norvig, P. (2010). Artificial intelligence: A modern approach (3rd ed.). Pearson Education.

Sweeney, S. (2023). Who wrote this? Essay mills and assessment – Considerations regarding contract cheating and AI in higher education. The International Journal of Management Education, 21(2), 100818. https://doi.org/10.1016/j.ijme.2023.100818

Tlili, A., Shehata, B., Adarkwah, M. A., Bozkurt, A., Hickey, D. T., Huang, R., & Agyemang, B. (2023). What if the devil is my guardian angel: ChatGPT as a case study of using chatbots in education. Smart Learning Environments, 10(1), 1–24. https://doi.org/10.1186/s40561-023-00237-x

Yu, H. (2023). Reflection on whether Chat GPT should be banned by academia from the perspective of education and teaching. Frontiers in Psychology, 14, 1181712. https://doi.org/10.3389/fpsyg.2023.1181712

Zawacki-Richter, O., Marín, V. I., Bond, M., & Gouverneur, F. (2019). Systematic review of research on artificial intelligence applications in higher education – where are the educators? International Journal of Educational Technology in Higher Education, 16(1), 39. https://doi.org/10.1186/s41239-019-0171-0
