Create a Newsletter in Canva

Create a professional and engaging classroom newsletter in Canva with this easy step-by-step tutorial for teachers. Learn how to set up your layout, add text and images, customize colors and fonts, and save or share your finished newsletter with families.

This tutorial is perfect for elementary and middle school teachers who want a simple, effective way to communicate classroom updates, upcoming events, and student highlights.

In this video, you’ll learn:
• How to choose a newsletter template in Canva
• How to customize text, fonts, and colors
• How to add photos and graphics
• How to download or share your newsletter

Getting Started in Canva – For Teachers

Welcome to the visual phase of curriculum design! In the posts on NotebookLM, we created some valuable learning resources using Google’s Deep Research feature in Gemini. Now we use Canva to visualize that information effectively, turning it into useful classroom resources that our students can navigate and understand.

This video is an absolute beginner’s guide, focusing on the features teachers need most: quickly creating historical infographics, building custom timelines of Canada’s path to sovereignty, and formatting assessments for maximum readability.

Learning Canva is the fastest way to add professional polish and boost student engagement, ensuring your hard-earned research doesn’t just sit on a page!

Deep Research for Teachers – Google Gemini

Google Gemini has a feature called “Deep Research”, and this tool goes beyond the typical chatbot. This AI feature rapidly synthesizes extensive, detailed research into a cohesive, structured resource, allowing you to instantly generate the foundational text for a new unit of study. It can translate complex curriculum outcomes and guiding questions into complete, curriculum-aligned learning materials, significantly streamlining the resource creation process.

You can also see the list of references it used to create the deeply researched output it provides.

AI Prompting Tips for Teachers – Get Better Results

Prompting is not a difficult skill to develop, and certainly, it’s not a required skill for using generative AI, but a good prompt does improve your odds of receiving the output you are hoping for. 

This video suggests the CTR approach to prompting: Context, Task, Refinement. First, give the chatbot context by telling it what role it is playing (e.g., an instructional designer for grade 9 social studies). Next, specify the task you want it to perform (e.g., create a list of ten projects students could choose from to demonstrate their understanding of how urbanization can shift and shape a society). Finally, tell it how you want the output to look (e.g., a list of the project choices along with a brief description of what the student would be asked to do). Including those three pieces of information will improve the result you get from the tool.
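To make the pattern concrete, here is a minimal sketch in Python (purely illustrative; the helper function and the exact wording are my own, not from the video) showing how the three CTR pieces combine into a single prompt:

```python
# Illustrative sketch of the CTR (Context, Task, Refinement) prompt pattern.
# The helper function and example wording are hypothetical, not from the video.

def build_ctr_prompt(context: str, task: str, refinement: str) -> str:
    """Combine the three CTR pieces into a single chatbot prompt."""
    return f"{context}\n\n{task}\n\n{refinement}"

prompt = build_ctr_prompt(
    context="You are an instructional designer for grade 9 social studies.",
    task=(
        "Create a list of ten projects students could choose from to "
        "demonstrate their understanding of how urbanization can shift "
        "and shape a society."
    ),
    refinement=(
        "Present the output as a numbered list, with each project name "
        "followed by a brief description of what the student would be "
        "asked to do."
    ),
)

# Paste the assembled prompt into Gemini, ChatGPT, or another chatbot.
print(prompt)
```

The same three-part structure works typed directly into the chat window; the code simply makes the Context, Task, and Refinement pieces easy to see and reuse.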

NotebookLM to Create Classroom Resources

This video continues from the introduction to NotebookLM; in it, we create a video resource, an infographic, a mind map, a report, and an interactive quiz.

Now the real magic begins inside NotebookLM. We’ll show you how to take that dense history text and instantly turn it into usable student resources. Watch us quickly generate mind maps, infographics, instructional videos, and reports, and design custom quiz questions based specifically on the curricular content we gleaned from Gemini’s Deep Research. This is how you shortcut hours of manual content creation and get straight to the art and science of teaching!

NotebookLM To Create Amazing Classroom Resources

You know how much time it takes to map out a new unit, especially when dealing with very specific curriculum points, or new curriculum such as we will soon be receiving for grades 7, 8, and 9? This video shows you how to use NotebookLM to help generate beautiful and robust learning resources for your classroom. This example used the new Alberta grade seven social studies curriculum, covering content ranging from the evolution of the NWMP to the legal details of the Statute of Westminster.

This tool is essentially your instant research assistant. It quickly synthesizes complex historical information and organizes it into a coherent, chapter-ready text that aligns perfectly with your detailed learning outcomes. This means you skip the headache of deep-diving into sources and get a ready-made content foundation, giving you back valuable time to focus on designing the fun, engaging activities for your students.

Getting Started in Gemini – For Teachers

This video is for anyone who has not yet used AI as part of their teaching practice. This simple chatbot has the potential to save you several hours of work each week. It won’t feed your dog or take your kids to hockey practice, but it can help you come up with ideas for your classroom, or create new resources.

This video provides some tangible ideas to get you started in Gemini.

So take a deep breath and dive in.

Chatbots are like having an expert in all things, right at your fingertips. Just remember to read through the output they generate in case there are errors.

I’m Afraid My Teacher Will Accuse Me of Using AI

What About the Really Good Kids?

How much time and energy are teachers putting into catching the cheaters, without considering the ramifications that threats about AI punishments have on the really good students?

When teachers give long threatening speeches about what will happen to students who use generative AI, what impact does that have on students who would never do something like that?

IT IS TERRIFYING TO THEM.

They fear being accused of doing something they did not do. They fear that their academic achievements could be attributed to a machine, and how will they ever convince their teachers (or their professors) that they did not actually commit this act of academic misconduct? They are afraid to even touch the tools in case someone believes that they cheated.

It’s Easy to Focus on the Problem

In our classroom spaces, it’s easy to go down the rabbit hole of trying to prevent students from using AI to complete their work.

  • How many students in your class do you think would actually do it?
  • What is the cost to your high flyers when the focus is solely on inappropriate usage of generative AI? 
  • If we teach students to fear AI, are we preparing them for a world where AI is becoming increasingly ubiquitous?
  • Who is going to teach this generation of students how they CAN use AI?
  • Have you ever examined the flip side of this question?

Unintended Harm

This post is not about shaming teachers for trying to take control of a new disruptive technology. Generative AI has disrupted the process of teaching and learning. It has brought new challenges and new considerations, and we are all figuring this out together.

In the process of conducting research for my Doctorate in Education, I’ve been fortunate to benefit from conversations spanning both K-12 and post-secondary education, and I’ve come to realize that real harm is done to students who hold themselves to a high academic standard when teachers and professors threaten ramifications that may follow if the teacher suspects the student of cheating on a written assignment.

Can We Find a Happy Medium?

Teachers, you do need to have a plan in place for those times when a student engages in academically dishonest behaviours, just as you have a plan in place for other behaviour infractions.

If you’d like some thoughts that may help you navigate this challenge gracefully, please take a look at my blog post titled AI Detection Tools. It does not advocate for using those tools (there are far too many false positives in that environment), but it offers a script that, in almost any conversation, will let you get to the bottom of the issue without burning bridges or destroying your teacher-student relationship.

These challenges can be navigated. Kids’ behaviour needs to be corrected sometimes. But we don’t need to let AI take dignity away from us, our curriculum, or, most important of all, our students.

AI Detection Tools

Students and Generative AI

When the Turing Test was passed in November 2022 with the release of ChatGPT, things changed for teachers, especially teachers who rely on the essay as their “gold standard” of assessment. Suddenly, students could utilize generative AI to complete written work for them, leaving some teachers floundering.

AI detection tools like Turnitin or GPTZero are tempting to use. The teacher takes the student’s written work, loads it into one of these detection tools, and the tool confirms or refutes the teacher’s suspicion that the written work may have been completed by AI. Easy, right?

There are actually a few problems in this scenario. We’ll go through them one at a time here.

They Don’t Work

It has been shown repeatedly in the empirical literature that AI detection apps fail; indeed, research reveals that these detection tools remain unreliable (An & James, 2025; Moorhouse, 2024). Classroom climate can be quickly poisoned by wrongly accusing a student of utilizing generative AI for an assignment.

The teacher-student relationship takes time to develop, and it serves a powerful pedagogical purpose in the classroom. When the relationship is destroyed, it impacts not just the teacher and one student; the effects reach much further, and it would be a genuine shame for this pedagogical tool to be obliterated by a false result from an AI detector.

It’s an “Arms Race”

Villasenor (2023) stated that in the arms race between writing tools and detection tools, “the AI writing tools will always be one step ahead of the tools to detect AI text”, suggesting that however fast the detection tools advance, the tools students use to write will already be moving ahead. Van Dis et al. (2023) noted the same, stating that “such detection methods are likely to be circumvented by evolved AI technologies and clever prompts” (p. 225). This alone suggests that these tools will not yield the result the sleuthing teacher is seeking.

One Step Ahead

It should also be noted that students who would choose to use AI to complete their writing will likely also use social media apps like TikTok to learn new techniques to conceal or “humanize” the text they intend to submit. There are many content obfuscation techniques that a student may put into play, if they are the type of student who would undertake such an action.

How Prevalent is the Problem?

It’s difficult to gauge the actual number of students who will choose this method of cheating on their schoolwork. The companies who make these apps to detect academic misconduct have an incentive to claim rates higher than what the literature indicates, as they are selling a product. It’s in their best interest to make claims about extremely high numbers of “gotcha” documents as a means of convincing potential customers that their product is valuable.

Should Students Use Generative AI?

According to Weber-Wulff et al. (2023), “the use of AI tools is not automatically unethical. On the contrary, as AI will permeate society and most professions in the near future, there is a need to discuss with students the benefits and limitations of AI tools, provide them with opportunities to expand their knowledge of such tools, and teach them how to use AI ethically and transparently” (p. 2). One of the basic beliefs teachers hold is that they are preparing students for the real world. To that end, teachers will need to make their peace with the fact that AI is here to stay, and to appropriately prepare students for a future that includes AI, they will have to adjust their means of assessment to reach beyond the essay. This is not to say that we must abandon the essay as an assessment mechanism, but it does demand some innovation and rethinking on the part of teachers everywhere.

So yes, students should use AI when it is appropriate to do so, and teaching students HOW to use AI ethically and appropriately is going to fall on the shoulders of teachers.

So What Should a Teacher Do?

AI detectors don’t work, and a false accusation can destroy the teacher-student relationship. So, what should a teacher do if they suspect that a student has utilized generative AI to write a document that is part of the class assessment?

This is actually where you need to lean on the teacher-student relationship, and believe it or not, it can be an opportunity to further build that relationship.

But you’re going to have to play “Columbo” for a few minutes.

(Apologies to the younger generation who are not so familiar with Columbo. It was a popular American mystery TV series whose title character, an LAPD homicide detective, solves murders while appearing unable to put the pieces of each mystery together; in reality, his unassuming, disorganized manner hides a sharp, observant mind.)

What I mean is that you’ll need to have a conversation with the student you suspect of having used generative AI to write their work, and you may have to lay it on a little bit thick. Maybe try something like this:

The Script

Teacher: So, I read your essay over the weekend, and wow!!! Has your writing ever improved this year!! When I compare what you turned in for this essay to the work you were writing in September (pick your date/time in the past) I am blown away.

(watch for signs of discomfort: fidgeting, facial redness, beads of sweat, aggression)

Teacher: Here’s the thing though, it’s my job to teach and assess the curriculum, and because your writing has improved so dramatically, it’s my job to make sure that you understand the outcomes on the [insert name of course you are teaching said student] program of studies. So, I’m going to ask you some clarifying questions to ensure that your comprehension of the curriculum is at the level that this writing would suggest that it is.

(continue to watch for signs of discomfort)

Teacher: When you wrote [insert phrase from student writing that seems unlikely that they actually wrote], what did you mean? How did you draw that conclusion? [Ask any question that occurs to you with respect to the writing they submitted.] Just be Columbo. Be confused, don’t reveal your cards, and don’t make an accusation.

At this point you will be perilously close to having the student confess.

Ask another clarifying question. If the student actually wrote the work, they should have no problem answering your questions, and you will be able to genuinely mean your compliments about their writing.

If the student cannot answer your questions, but will not confess, provide them with a sheet of paper and a pen or a pencil and ask them to write a summary paragraph that would allow someone who has never heard of [insert topic of the essay here] before to understand the fundamental premises of the essay. 

Again, at this point, you’re on the verge of the truth, and no accusation has been made to the student.

Factor what they write down on that sheet of paper into their grade.

Scale This to Your Entire Class

You may want to consider scaling this summary task to your entire class. Have every student complete it in the moments after they submit their essays to you for grading. It matters not whether they submit their work through an LMS like Google Classroom, or print it and hand it in. Ask every student in your class to take out a single sheet of paper and a pen or a pencil. They must write a summary of their essay in class, without the essay or a computing device; just a pen-and-paper summary of what they just handed in to you.

If you build this accountability into your system, students will save their AI use for someone else’s essay. You won’t be the target of this misbehaviour for long.

References

An, Y., & James, S. (2025). Generative AI Integration in K-12 Settings: Teachers’ Perceptions and Levels of Integration. TechTrends. https://doi.org/10.1007/s11528-025-01114-9

Moorhouse, B. L. (2024). Beginning and first-year language teachers’ readiness for the generative AI age. Computers and Education: Artificial Intelligence, 6, 100201. https://doi.org/10.1016/j.caeai.2024.100201

Van Dis, E. A. M., Bollen, J., Zuidema, W., Van Rooij, R., & Bockting, C. L. (2023). ChatGPT: Five priorities for research. Nature, 614(7947), 224–226. https://doi.org/10.1038/d41586-023-00288-7

Villasenor, J. (2023). How ChatGPT can improve education, not threaten it. Scientific American. https://www.scientificamerican.com/article/how-chatgpt-can-improve-education-not-threaten-it/

Weber-Wulff, D., Anohina-Naumeca, A., Bjelobaba, S., Foltýnek, T., Guerrero-Dib, J., Popoola, O., Šigut, P., & Waddington, L. (2023). Testing of detection tools for AI-generated text. International Journal for Educational Integrity, 19(1), 26. https://doi.org/10.1007/s40979-023-00146-z

Algorithmic Governance, Epistemic Fracture, Surveillance, and Visibility

Introduction

We are living in a time when the pace of technology moves so quickly that all sectors of society are in constant flux, adjusting to the changes that continually roll out from technological innovators. To situate the pace of technological transformation, we need only consider that in 1965, microchip engineer Gordon Moore, cofounder of the Intel Corporation, famously observed that the number of components on a microchip was doubling every year, which resulted in technological advancements continually improving while simultaneously becoming more affordable. Shalf and Leland summarized Moore’s Law as the prediction “that this trend, driven by economic considerations of cost and yield, would continue for at least a decade, although later the integration pace was moderated to doubling approximately every 18 months” (2015, p. 14). This already incredible rate of change has brought forth new challenges and considerations to countries and cultures everywhere. Modern humans are inundated with information, news, communication, and a wide array of other notifications from all manner of devices.

With this ease of information flow and data consumption, new challenges have arisen, not the least of which is the concept of algorithmic personalization, also referred to as algorithmic governance, or algocracy (Aneesh, 2006). Defined as “the probability that a set of coded instructions in heterogeneous input-output computing systems will be able to render decisions without human intervention and/or structure the possible field of action by harnessing specific data” (Issar & Aneesh, 2022, p. 3), algorithmic governance exists behind the scenes, largely unnoticed, in many of our digital interactions.

Notwithstanding the fact that “algorithms are a powerful if largely unnoticed social presence” (Beer, 2017, p. 2), they appear not to be a topic of concern to many people beyond those who work in technology. Regardless of this lack of popular concern, the algorithms that operate in the background of the technologies we engage with are a powerful social influence, holding the potential to control the flow of information (Alkhatib & Bernstein, 2019; Harris, 2022; Hobbs, 2020), the credibility of that information (Blake-Turner, 2020; Connolly, 2023; Harris, 2022; Hobbs, 2020; Issar & Aneesh, 2022), the surveillance of the people (Issar & Aneesh, 2022), and the visibility of the people (Bucher, 2012; Hoadley, 2017) who use the technology. The fact that “authority is increasingly expressed algorithmically” (Pasquale, 2015, pp. 7–8) should present concerns to learning scientists, as hegemonic processes, epistemic stability, obscured voices, and human agency sit at the core of the Learning Sciences, which aims to “productively address the most compelling issues of learning today” (Philip & Sengupta, 2021, p. 331). Though some authors use the term ‘algorithmic personalization’, to underscore the power wielded by these ubiquitous algorithms I will use the term algorithmic governance throughout this paper.

Flow of information and misinformation

The first topic to address is the flow of information, as “today, algorithmic personalization is present nearly every time users use the internet, shaping the offerings displayed for information, entertainment, and persuasion” (Hobbs, 2020, p. 523). This brings forward the obvious epistemic question: who decides which items are brought to the user’s attention, and, equally importantly, what is not brought to the user’s attention? The lack of transparency of the algorithms (Alkhatib & Bernstein, 2019, p. 3), coupled with the fact that even those who create algorithms cannot fully understand the machine-learning mechanisms by which decisions are reached (Hobbs, 2020; Rainie & Anderson, 2017), creates a perplexing and nebulous problem: we do not actually know who controls our flow of digital information. This creates an epistemic fracture, in the sense that the manner in which information is delivered to the user is unknown, and the information being delivered may or may not be true. Societies across the world are facing intense social and political polarization (Conway, 2020, p. 3), and the algorithms, through their role in reinforcing problematic beliefs, are complicit in the creation of this fragmentation.

A quick glance at the creators and CEOs of a few of the major technology companies (Google [Alphabet], Facebook [Meta], Twitter [X], and Amazon) suggests that white males have dominated the industry to date, and it would be illogical to assume that algorithms written by white, western, colonial settlers would be devoid of any human bias. Hobbs summed it up succinctly, saying that “algorithms are created by people whose own biases may be embodied in the code they write” (2020, p. 524). This demands attention, as the potential to continue the hegemonic control of information exists within the algorithms. Considering the colonial mindset upon which Canada and the United States were founded, asking questions about who is determining the content we consume digitally is imperative; our history is one of enslavement and White dominance as opposed to one of collaboration and equality, and this legacy may now play a silent, covert role in our digital society. We need only look to our recent past to see that printed history textbooks served to perpetuate the domination of white culture, which King and Simmons sum up by saying “in many traditional history textbooks, history moves through a paradigm that is historically important to the dominant White culture” (2018, p. 110). It does not seem a leap in logic to assume that at least some of the algorithms underlying the digital technologies we use daily may be complicit, as textbooks have been, in focusing the attention of the user back onto a White gaze. Marin’s statement that Western assumptions “often tacitly work their way into research on human learning and development and the design of learning environments” (2020, p. 281) underscores not only the possibility, but indeed the likelihood, that this oppression is ongoing today.

This suspicion of control is compounded by the occasional changes that are actually visible. One example is Elon Musk arbitrarily changing the information flow on Twitter, including requiring users to have a Twitter login to view tweets, then silently removing that requirement and instead limiting the number of tweets a person would be permitted to read in a given day (Warzel, 2023). Compounding the dubious nature of these changes, Musk is a “self-professed free-speech ‘absolutist’” (Warzel, 2023), a claim that serves not to alleviate, but rather to underscore, reasons to be suspicious of his platform and its algorithm, as some of the statements he has personally made ‘freely’ have revealed him to be duplicitous (Farrow, 2023). It is worthwhile, however, to note that since Musk’s takeover and ongoing rebranding of the platform, many users have taken a break from it, have left it entirely, or do not see themselves being active on it a year down the line (Dinesh & Odabaş, 2023). When the majority of users signed up for Twitter, these restrictions (as well as the eased restrictions) were not what they signed up for or agreed to; yet when Obar and Oeldorf-Hirsch updated the academic literature regarding people’s reading of user agreements, the previous research was supported, and their summary revealed that “individuals often ignore privacy and TOS policies for social networking services” (2020, p. 142). So, although the user experience on Twitter has changed since Musk’s acquisition, it cannot be claimed that users would not have agreed to these terms and conditions, as they would not likely have read the terms at all.

Credibility of information

A second major consideration pertaining to algorithmic governance is the credibility of the information we encounter online. We have already established that the flow of information is controlled, shaped, eased, and released algorithmically. These same algorithms are also responsible for the broad distribution of the barrage of disinformation and fake news in recent years. Misinformation is untrue content that circulates online, but the intention behind it is, at least in some cases, innocent, in that the person sharing it believed it to be real. Altay et al. defined misinformation “in its broadest sense, that is, as an umbrella term encompassing all forms of false or misleading information regardless of the intent behind it” (2023, p. 2). Fake news, on the other hand, has a more specific definition, as it is deliberately untrue. Springboarding from the definition Rini (2017) proposed, Blake-Turner defined a fake news story as

one that purports to describe events in the real world, typically by mimicking the conventions of traditional media reportage, yet is [not justifiably believed by its creators to be significantly true], and is transmitted [by them] with the two goals of being widely re-transmitted and of deceiving at least some of its audience. (2020, p. 2)

Politicians and leaders regularly engage in the creation and promotion of fake news in their campaigns, news releases, and press conferences, in their quest to maintain their voting base and, whenever possible, to increase it. This fake news is shared and redistributed by followers of the political party responsible for it, run through the algorithms that govern information, and then delivered to the people who are most likely to believe it.

Lying is by no means a new skill in the world of politics. From the beginnings of democracy, impressing the voter in some capacity has been important to gaining or retaining power. “The importance of the political domain ensures that some parties have good pragmatic reason to fake such content – a point illustrated by the long history of misleading claims and advertisements in politics” (Harris, 2022, p. 83). What is new is the ability of the common person to create content that appears to be true. In the past, news and information were communicated through television, newspapers, magazines, and books, all of which involved an editor who would carefully read through manuscripts and determine their publication value. Today, anyone with a computer, some simple photo-editing apps, and a commitment to an idea can create content that not only seems real but is entirely believable. Our older generations have lived the majority of their lives in a time when published material had already been vetted; to them, published materials were factual. Now they, along with the younger generations, are faced every day with realistic fakes that challenge everyone to question the truth of practically everything encountered online. Sources that once delivered accurate and factual knowledge are now deceptive, and at times even difficult to fact-check.

Deepfakes are a newcomer to the world of publishing, ushering in an even deeper level of falsehood: obscured facts and inauthentic yet lifelike video footage. “The term ‘deepfake’ is most commonly used to refer to videos generated through deep learning processes that allow for an individual’s likeness to be superimposed onto a figure in an existing video” (Harris, 2022, p. 83). Our epistemic environment has already been compromised by the prevalence of untrue words typed on the screen, along with compellingly falsified photos and images; now we are facing the corruption of what was previously seen as the “smoking gun” of truth: video evidence. Harris also noted that at the time of his writing, a mere year ago, deepfakes remained relatively unconvincing; with the sudden advent of generative AI, deepfakes have already grown markedly more realistic.

Misinformation and fake news create epistemic problems in modern society. Blake-Turner defined an epistemic environment as including “various things a member of the community is in a position to know, or at least rationally believe, about the environment itself” (2020, p. 10). The key words in the definition, “rationally believe”, underscore how the existence of fake news and deepfake technology creates tension between what is fact and what is believed to be fact. At the time of this writing, the former American president has been indicted four times, in four different jurisdictions, facing 91 felony charges (Baker, 2023), almost all of which relate to lying, misinformation, and ultimately turning those lies into action in an attempt to overturn the results of the 2020 federal election. In an illustration of the severity of the epistemic problem, those lies and disinformation resulted in an attack on the U.S. Capitol on January 6, 2021; they have resulted in hundreds of people being sentenced to jail time for their participation in that riot; police have been beaten and killed; and the threats of violence and riots continue from this former president. Blake-Turner helps make sense of the manner in which these events came to pass: “the more fake news stories that are in circulation, the more alternatives are made salient and thereby relevant – alternatives that agents must be in a position to rule out” (2020, p. 11). The onslaught of lies, misinformation, and political propaganda has created an environment in which many people struggle to find the truth amid the chaos and deceit.

Surveillance of the people

Another crucial element of the algorithms that live in our midst is the subtle yet ongoing surveillance of the people who use the technology. As users of networked technology, we should all be aware that surveillance could be occurring, but the extent to which it is really happening should be of concern. “While the problem of surveillance has often been equated with the loss of privacy, its effects are wider as it reflects a form of asymmetrical dominance where the party at the receiving end may not know that they are under surveillance” (Issar & Aneesh, 2022, p. 7). Foucault (1977) described surveillance through what he termed panopticism, an architectural arrangement whereby people were always being watched. Arguably, the panopticon has been recreated virtually via the digital trails we leave when we utilize networked technologies. In 2016, the British firm Cambridge Analytica reported having 4,000 data points on each voter in the United States; this included some voluntarily given data, but also much covertly gathered data, including data from Facebook, loyalty cards, gym memberships, and other traceable sources (Brannelly, 2016). While these numbers are shocking, it is worse still that this data was sold and then used by the Republican party to target and influence undecided voters to vote for Donald Trump in the 2016 election; this is precisely the asymmetrical dominance that Issar and Aneesh described. The American voters were oblivious to the fact that their data was being collected in this manner, that their personal data was being amassed into one neat package, and that this package was being sold for the purposes of manipulating emotions to achieve the political goals of one party. Ironically, as his court dates approach, even the duplicitous former president of the United States could not escape surveillance; his movements, messages, conversations, and other interactions were also recorded, and though he continues to lie about his actions and manipulate public perception of some of his deeds, he has been unable to exert enough control to avoid, eventually, being exposed. The ongoing misinformation campaign and the algorithmic governance, however, will continue to provide his supporters with images, words, articles, and ideas that uphold their damaged and inaccurate beliefs.

In this model of surveillance, everyone is being watched; everyone is visible. Bucher stated that “surveillance thus signifies a state of permanent visibility” (2012, p. 1170); however, “concerns about the privacy impact of new technologies are nothing new” (Joinson et al., 2011, p. 33). Within networks and social media there exists a privacy paradox: when asked about privacy, “individuals appear to value privacy, but when behaviors are examined, individual actions suggest that privacy is not a priority” (Norberg et al., 2007; Obar & Oeldorf-Hirsch, 2020, p. 142). After hastily clicking “accept” on the user agreement, we navigate the internet, viewing personalized advertisements and “information” nuggets that align with our personal interests, growing increasingly oblivious to the fact that this algorithmic personalization is part of what is termed surveillance capitalism, “the practice of translating human experience into data that can be used to make predictions about behavior” (Hobbs, 2020, p. 523). We are seen by someone, somewhere, every time we make a purchase, click like on a video or social media post, swipe our points card at a store, drive past someone’s Ring camera, plug in our electric vehicle to charge, and perform myriad other activities too numerous to mention. This surveillance contributes to the data points that are logged for every individual.

Visibility of the people

The surveillance and visibility of all people through data-collecting algorithms should not be confused with people being seen and represented online. Indeed, the algorithms behind many technologies serve to enforce and underscore the prejudiced paradigms often enacted in the face-to-face world. Huq reported that “police, courts, and parole boards across the country are turning to sophisticated algorithmic instruments to guide decisions about the where, whom, and when of law enforcement” (2019, p. 1045). This is a terrifying prospect for people from marginalized communities who have been historically targeted by the law. Alkhatib and Bernstein summed up the findings of researchers, saying “these decisions can have weighty consequences: they determine whether we’re excluded from social environments, they decide whether we should be paid for our work, they influence whether we’re sent to jail or released on bail” (2019, p. 1). The faceless anonymity the internet promises is not equally afforded, as the algorithms that follow us along our digital paths ensure that our lives are logged, mathematically and computationally assessed, and delivered back to us through algorithmic governance.

Geography has historically defined the physical location of a person on the globe; however, in a globalized world of networked interactions, the definition needs to extend to the places we visit online. Researchers have argued that “space is not simply a setting, but rather it plays an active role in the construction and organization of social life which is entangled with processes of knowledge and power” (Neely & Samura, 2011; Pham & Philip, 2021). A lens of critical geography is warranted as we consider the impact and implications of the algorithmic power we engage with daily.

Although the concept of the digital divide has been a topic amongst educators since the term was first coined in the 1990s, it has by and large been limited to the question of students having access to digital devices by which to access the information contained on the internet. The digital divide and critical geography must intersect when we examine online interactions, to ascertain not only the status of the devices our students have access to, but also the subliminal reinforcers of racism, marginalization, and ontological oppression embedded in the digital landscape. Gilbert argued that “‘digital divide’ research needs to be situated within a broader theory of inequality – specifically one that incorporates an analysis of place, scale, and power – in order to better understand the relations of digital and urban inequalities in the United States” (2010, p. 1001), a statement easily extended to include Canada. The digital divide must also include the racialized experience of minorities and people of colour; inasmuch as people of colour encounter advertisements online that differ from those shown to white people, they also experience challenges such as the errors frequently made by facial-recognition systems, which “make mistakes with Black faces at far higher rates than they do with white ones” (Issar & Aneesh, 2022, p. 8). As a continent with a history of antiblackness and racism, we must be aware that “the micro and macro instances of prejudices, stereotyping, and discrimination in society directed toward persons of African descent” stem “largely from how historical narratives present Black people” (King & Simmons, 2018, p. 109), not only because we have a past that facilitated racism, but because this racism is ongoing.

As an illustration of the power of the algorithm, we can look to recent news coming out of the state of Florida. Under the current governor, Ron DeSantis, the same governor who enacted the “Stop Woke Act” and the “don’t say gay” restriction, the Black history curriculum has recently been changed to include standards that promote the racist idea that slavery in some way benefited Black people, and any discussion of the Black Lives Matter movement has been silenced in Florida schools (Burga, 2023). Upon learning of this unimaginable educational situation, I conducted a search on YouTube to learn more, and this search served to underscore Issar and Aneesh’s assertion that “one of the difficulties with algorithmic systems is that they can simultaneously be socially neutral and socially significant” (2022, p. 7). My search was socially neutral when I was merely seeking more information about a current event in the state of Florida. It became socially significant in the days that followed. What transpired after the search was a semi-bombardment of what I would categorize as racial propaganda on my device, not restricted to my YouTube application. One brief search to learn more about a shocking topic led the algorithm to surface not only content informing me about what is occurring in Florida politics, but also suggestions for content that supports what is occurring in Florida; content that I do not want brought to my attention repeatedly. Over time, repeated exposure to problematic or blatantly false information leads the user to begin to think that there are lots of people who believe this, and there is strength in numbers: if many people believe something to be true, it must then be true.

This is problematic in obvious ways, but there are also subtler ways the algorithm continues to exert its power. Imagine that a teacher conducts a search to support a lesson on a particular concept. If the teacher has searched for something that is questionable in its factuality, something containing racist tropes or other examples of symbolic violence, the content the teacher continues to be exposed to after the search will reinforce that biased and potentially harmful perspective. Further, as the teacher shares her screen with the class during instruction, there is a distinct likelihood that the results of this search will appear in advertisements, in recommended YouTube videos, and in the teacher’s Google searches. Beyond the potential for professional discomfort resulting from algorithmically suggested content lies the epistemic problem that this content is being recycled and presented as true, realistic, informative, valuable content. In this we see what Beer warned: “power is realised in the outcomes of algorithmic processes” (2017, p. 7). While this might produce an opportunity to teach students about algorithms and the subversive power they possess, algorithmic awareness is only an emergent conversation for the majority of people, implying that the teacher may not possess the language or skillset to explain the unsolicited content displayed on the screen during instructional time.

This is not to suggest there is no hope, or that our classrooms will be victims of algorithmic governance in the long term. “We are now seeing a growing interest in treating algorithms as objects of study” (Beer, 2017, p. 3), and with this interest will come new information for understanding and combatting the reality of algorithmic presence. Hobbs argued that “we should know how algorithmic personalization affects preservice and practicing teachers as they search for and find online information resources for teaching and learning” (2020, p. 525). I would extend that statement to include all teachers, preservice and experienced, as algorithmic governance impacts everyone.

Conclusion

The power held by the opaque algorithms that control the flow and the visibility of digital information presents what Rittel and Webber (1973) would call a wicked problem. Wicked problems lack the clarifying traits of simpler problems, with the term “wicked” meaning malignant, vicious, tricky, and aggressive (p. 160). The algorithm that shapes and changes our access to information, operating as a secret phantom, is indeed a wicked problem. Hobbs stated that “given the many different ways that algorithmic personalization affects peoples’ lives online, it will be important to advance theoretical concepts and develop pedagogies that deepen our understanding of algorithmic personalization’s potential impact on learning” (2020, p. 525).

Further algorithmic challenges await as we move toward a future infused with ubiquitous AI. Algorithms have brought a new type of manipulation into the digitally connected world, with the potential to further increase the polarization already being experienced in modern society. Artificial intelligence presents a new wicked problem for education as we consider its impact on assessment, plagiarism, contract cheating, and the myriad other relevant topics that will reveal themselves as this new technological revolution unfolds. Educational researchers will need to continue to interrogate and explore the powers behind the algorithms that impact all digital users worldwide, in order to advance the accurate, equal, and ethically responsible dissemination of information.


References

Alkhatib, A., & Bernstein, M. (2019). Street-level algorithms: A theory at the gaps between policy and decisions. Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems. https://doi.org/10.1145/3290605.3300760

Altay, S., Berriche, M., & Acerbi, A. (2023). Misinformation on Misinformation: Conceptual and Methodological Challenges. Social Media and Society, 9(1). https://doi.org/10.1177/20563051221150412

Aneesh, A. (2006). Virtual migration: The programming of globalization. Duke University Press.

Baker, P. (2023, August 14). Trump indictment, Part IV: A spectacle that has become surreally routine. The New York Times. https://www.nytimes.com/2023/08/14/us/politics/trump-indictments-georgia-criminal-charges.html

Beer, D. (2017). The social power of algorithms. Information, Communication & Society, 20(1), 1–13. https://doi.org/10.1080/1369118X.2016.1216147

Blake-Turner, C. (2020). Fake news, relevant alternatives, and the degradation of our epistemic environment. Inquiry. Advance online publication. https://doi.org/10.1080/0020174X.2020.1725623

Brannelly, K. (2016). Trump campaign pays millions to overseas big data firm. NBC News. https://www.nbcnews.com/storyline/2016-election-day/trump-campaign-pays-millions-overseas-big-data-firm-n677321

Bucher, T. (2012). Want to be on the top? Algorithmic power and the threat of invisibility on Facebook. New Media & Society, 14(7), 1164–1180. https://doi.org/10.1177/1461444812440159

Burga, S. (2023, July). Florida approves controversial guidelines for Black history curriculum. Here’s what to know. Time. https://time.com/6296413/florida-board-of-education-black-history/

Connolly, R. (2023). Datafication, Platformization, Algorithmic Governance, and Digital Sovereignty: Four Concepts You Should Teach. ACM Inroads, 14(1), 40–48. https://doi.org/10.1145/3583087

Conway, K. (2020). The art of communication in a polarized world. AU Press.

Dinesh, S., & Odabaş, M. (2023, July 26). 8 facts about Americans and Twitter as it rebrands to X. Pew Research Center. https://www.pewresearch.org/short-reads/2023/07/26/8-facts-about-americans-and-twitter-as-it-rebrands-to-x/

Farrow, R. (2023, August). Elon Musk’s shadow rule. The New Yorker. https://www.newyorker.com/magazine/2023/08/28/elon-musks-shadow-rule

Foucault, M. (1977). Discipline and punish: The birth of the prison. Allen Lane.

Gilbert, M. (2010). Theorizing digital and urban inequalities: Critical geographies of “race”, gender and technological capital. Information, Communication & Society, 13(7), 1000–1018. https://doi.org/10.1080/1369118X.2010.499954

Harris, K. R. (2022). Real Fakes: The Epistemology of Online Misinformation. Philosophy & Technology, 35(3), 83–83. https://doi.org/10.1007/s13347-022-00581-9

Hobbs, R. (2020). Propaganda in an Age of Algorithmic Personalization: Expanding Literacy Research and Practice. Reading Research Quarterly, 55(3), 521–533. https://doi.org/10.1002/rrq.301

Huq, A. Z. (2019). Racial equity in algorithmic criminal justice. Duke Law Journal, 68(6), 1043–1134.

Issar, S., & Aneesh, A. (2022). What is algorithmic governance? Sociology Compass, 16(1). https://doi.org/10.1111/soc4.12955

Joinson, A., Houghton, D., Vasalou, A., & Marder, B. (2011). Digital crowding: Privacy, self-disclosure, and technology. In S. Trepte & L. Reinecke (Eds.), Privacy online (pp. 33–45). Springer. https://doi.org/10.1007/978-3-642-21521-6

King, L. J., & Simmons, C. (2018). Narratives of Black history in textbooks: Canada and the United States. In S. A. Metzger & L. M. Harris (Eds.), The Wiley international handbook of history teaching and learning. Wiley-Blackwell.

Neely, B., & Samura, M. (2011). Social geographies of race: connecting race and space. Ethnic and Racial Studies, 34(11), 1933–1952. https://doi.org/10.1080/01419870.2011.559262

Norberg, P., Horne, D. R., & Horne, D. A. (2007). Privacy Paradox: Personal Information Disclosure Intentions versus Behaviors. The Journal of Consumer Affairs, 41(1), 100–126. https://doi.org/10.1111/j.1745-6606.2006.00070.x

Obar, J. A., & Oeldorf-Hirsch, A. (2020). The biggest lie on the Internet: Ignoring the privacy policies and terms of service policies of social networking services. Information, Communication & Society, 23(1), 128–147. https://doi.org/10.1080/1369118X.2018.1486870

Pasquale, F. (2015). The black box society: The secret algorithms that control money and information. Harvard University Press.

Pham, J., & Philip, T. (2021). Shifting education reform towards anti-racist and intersectional visions of justice: A study of pedagogies of organizing by a teacher of Color. Journal of the Learning Sciences, 30(1), 27–51. https://doi.org/10.1080/10508406.2020.1768098

Philip, T., & Sengupta, P. (2021). Theories of learning as theories of society: A contrapuntal approach to expanding disciplinary authenticity in computing. Journal of the Learning Sciences, 30(2), 330–349. https://doi.org/10.1080/10508406.2020.1828089

Rainie, L. & Anderson, J. (2017, May). The future of jobs and jobs training. Pew Research. https://www.pewresearch.org/internet/2017/05/03/the-future-of-jobs-and-jobs-training/

Rini, R. (2017). Fake news and partisan epistemology. Kennedy Institute of Ethics Journal, 27(2), E–43–E–64. https://doi.org/10.1353/ken.2017.0025

Rittel, H. W. J., & Webber, M. M. (1973). Dilemmas in a general theory of planning. Policy Sciences, 4(2), 155–169. https://doi.org/10.1007/BF01405730

Shalf, J., & Leland, R. (2015). Computing beyond Moore’s Law. Computer, 48(12), 14–23. https://doi.org/10.1109/MC.2015.374

Translated by Content Engine, L. L. C. (2023, Feb 08). Is it the end of Moore’s Law? Artificial intelligence like ChatGPT challenges the limits of physics. CE Noticias Financieras https://ezproxy.lib.ucalgary.ca/login?qurl=https%3A%2F%2Fwww.proquest.com%2Fwire-feeds%2Fis-end-moores-law-artificial-intelligence-like%2Fdocview%2F2774910208%2Fse-2%3Faccountid%3D9838

Warzel, C. (2023, July). Elon Musk Really Broke Twitter This Time. The Atlantic. https://www.theatlantic.com/technology/archive/2023/07/twitter-outage-elon-musk-user-restrictions/674609/ 


I got my Ethics Approval!

I got the green light today! 

The ethics process is not an interesting one to blog about, but it is a crucial step in the research process. The questions in the ethics application delve deeply into the rationale for conducting the research and, more importantly, the impact the research may have upon participants. The application was completed by me, with my supervisor as the Principal Investigator; she assisted me in ensuring that the application was thoroughly completed.

The application is then reviewed through the Institutional Research Information Services Solution (IRISS), and the reviewers respond with items that need clarification and/or attention. After a couple of back-and-forth online conversations regarding the needed revisions, my application was approved.

I then had to file the approved paperwork with the school district I will be working with for my research, as they require the paperwork 30 days in advance of the commencement of the research. I have submitted that already; I am hoping to deploy my survey on August 20, as there is a looming threat of a teacher strike early this fall. If I am going to have to be on strike, I’d like to be conducting the data analysis while that happens!

I am now a Doctoral Candidate!

I passed it!!

I passed my candidacy exam this morning! The above images reveal my nervousness in the moments leading up to the Zoom exam, and in the moments at the end. Let me explain.

The photo of the papers shows my specific research questions as they are worded in my proposal, along with the propositions I have put forth as part of my case study methodology. I anticipated that I might freeze and panic trying to recall exactly how I worded them in the final proposal, and words matter. The last thing I wanted to do was misquote myself on where the final wording landed for the questions and end up babbling!!

The photo on the right is of the esteemed faculty who served as my examination committee. I forgot to ask permission to post a photo to blog about my experience, so I have blurred all individuals as they were not offered an opportunity to decline.

What is a Doctoral Candidacy Exam like?

I can only speak to my personal experience, but if you are curious, this is how it played out:

In advance of the exam, I met with my Candidacy Committee: a group comprising my incredible supervisor and two other faculty members who are experts in the field where my specific research has landed. We selected two additional faculty members as examiners (both were from UCalgary as well; when I defend, there will need to be a member from another institution, but for candidacy the examiners can all be from UCalgary), and my proposal was provided to them several weeks prior to the exam.

A seventh professor participates in the examination as the “neutral chair”; their job is to ensure that times are adhered to and that protocols are followed. As I understand it, this allows the other professors to focus on the examination while someone else watches the clock.

To start the exam, I was given the first fifteen minutes to present my research and my proposal to the group. Upon completion of my presentation, each examiner, beginning with the professor “farthest from my research”, asked me questions about my research, and I had ten minutes in which to respond. I was allowed to take my time considering my responses, and I could consult my paperwork, notes, etc. if I wished. But ten minutes is actually a fairly truncated period of time in which to respond, so it was important to be well-versed and confident in my research intentions. Then the second examiner asked a question, and again I had ten minutes to respond. The questions then moved to the members of my Candidacy Committee, each of whom had the same opportunity to pose questions about my research, and again I had ten minutes to respond to each. The last to question me was my supervisor.

We then took a five-minute break.

And then we repeated the above process.

At the end of the second round of questioning, I logged out of Zoom entirely to allow the examiners to discuss the status of my candidacy. 

While they were only discussing for a matter of minutes, not hours, it felt much longer than it was.

But with a unanimous decision, they declared that I had passed the exam. I am now a doctoral candidate, and I can proceed with completing my ethics application to the university to earn the green light to conduct my research!

Take the Challenge! Make this the Best Year Ever!

Download our free planner here!!

A great school year is built on great relationships… for both teachers and students. The best learning occurs in classrooms where relationships are prioritized.

Our free planner provides you with an EASY strategy to take control of those relationships in a deliberate, equitable, targeted manner in which all students’ strengths are celebrated.

Developed from the research literature on the teacher-student relationship, this planner lays out a strategic approach for the coming school year to easily build great relationships with every student and their family.

Citations for the references contained in the planner are listed at the bottom of this page.

References

Ainsworth, M. D. S., Blehar, M. C., Waters, E., & Wall, S. (2015). Patterns of attachment: A psychological study of the strange situation. Routledge. (Original work published in 1979).

Ang, R. (2005). Development and Validation of the Teacher-Student Relationship Inventory Using Exploratory and Confirmatory Factor Analysis. The Journal of Experimental Education, 74(1), 55–74. https://doi.org/10.3200/JEXE.74.1.55-74

Ang, R. P., Ong, S. L., & Li, X. (2020). Student Version of the Teacher–Student Relationship Inventory (S-TSRI): Development, Validation and Invariance. Frontiers in Psychology, 11, 1724. https://doi.org/10.3389/fpsyg.2020.01724

Aultman, L. P., Williams-Johnson, M. R., & Schutz, P. A. (2009). Boundary dilemmas in teacher–student relationships: Struggling with “the line.” Teaching and Teacher Education, 25(5), 636–646. https://doi.org/10.1016/j.tate.2008.10.002

Birch, S. H., & Ladd, G. W. (1996). Interpersonal relationships in the school environment and children’s early school adjustment: The role of teachers and peers. In J. Juvonen & K. Wentzel (Eds.), Social motivation: Understanding children’s school adjustment. Cambridge University Press.

Corbin, C. M., Alamos, P., Lowenstein, A. E., Downer, J. T., & Brown, J. L. (2019). The role of teacher-student relationships in predicting teachers’ personal accomplishment and emotional exhaustion. Journal of School Psychology, 77, 1–12. https://doi.org/10.1016/j.jsp.2019.10.001

Hamre, B. K., & Pianta, R. C. (2001). Early teacher-child relationships and the trajectory of children’s school outcomes through eighth grade. Child Development, 72(2), 625–638. https://doi.org/10.1111/1467-8624.00301

Hattie, J., & Yates, G. (2013). Visible learning and the science of how we learn. Routledge. https://doi.org/10.4324/9781315885025

Peter, F., & Dalbert, C. (2010). Do my teachers treat me justly? Implications of students’ justice experience for class climate experience. Contemporary Educational Psychology, 35(4), 297–305. https://doi.org/10.1016/j.cedpsych.2010.06.001

Quin, D. (2017). Longitudinal and contextual associations between teacher–student relationships and student engagement: A systematic review. Review of Educational Research, 87(2), 345–387. https://doi.org/10.3102/0034654316669434

Stuhlman, M. W., & Pianta, R. C. (2002). Teachers’ narratives about their relationships with children: Associations with behavior in classrooms. School Psychology Review, 31(2), 148–163. https://doi.org/10.1080/02796015.2002.12086148

Vygotsky, L. (1978). Mind in society: The development of higher psychological processes (M. Cole, V. John-Steiner, S. Scribner, & E. Souberman, Eds.). Harvard University Press.

Wentzel, K. R. (1997). Student motivation in middle school: The role of perceived pedagogical caring. Journal of Educational Psychology, 89(3), 411–419.

Masterclass in Graduate Studies Organization

Completing a graduate degree while working full-time, having a family, and wanting to still have some personal time requires planning and deliberate strategies. As a specialist in education and educational technology, I have developed a simple but layered plan through which to complete my doctoral degree with minimal stress.

In the video below, I outline for you how to set yourself up to enjoy your degree, experience success, and feel in control of the process every step of the way.

Through the use of an iPad equipped with the app Goodnotes, and a computer with Zotero and Google Slides, I have limited my paper consumption significantly and streamlined my research process.

ChatGPT – Getting Started Beginner’s Guide

ChatGPT is the AI tool you’ve probably heard the most about. OpenAI made big headlines after deploying this chatbot in November 2022.

If you haven’t taken it for a test drive yet, and don’t want to ask someone to show you how to get started, this quick video will take you through setting up your account and entering your first prompts.

This is fun, it’s amazing, and it has the potential to reduce some of the burdens of teaching. You need to check this out as soon as possible!!
