teaching philosophy #
A la Nancy Chism, I’ll use this teaching philosophy to define “learning” and “teaching”, lay out goals for my students, describe how I implement and assess my methods, and relay ways in which my teaching develops over time.
what is learning? #
My teaching philosophy is rooted in the idea that effective learning is intrinsically motivated and internally manifested (Augustyniak et al., 2016). In other words, learning is not effective unless it is (a) actively sought out by the student, and (b) inextricably tied to the student’s inner experience. In my classes, students aren’t “receiving” outside knowledge about concepts; they are discovering that knowledge in themselves. For this reason, I aim to give students every opportunity to find “their way of understanding” concepts. Indeed, the learning process is a way for students to express their inner selves, and who they are as people should play a critical part in their learning; this should ultimately make the experience interesting, worthwhile, and especially fulfilling for them.
As an example, a struggling student in my math class once explained a particular calculus concept by comparing it to the relationship between an airplane’s airspeed and its ground speed. The analogy was entirely their own; I had never heard it used to describe the concept before. Better still, their description was accurate! This kind of interpretation of a lesson, completely unique to a student’s own personal experience, is strong evidence of the deep processing (Dinsmore & Alexander, 2012) I hope students gain in my courses.
what is teaching? #
For this kind of effective learning to happen, an instructor must equip and empower students to challenge themselves in their learning. That is, it is the student who is “challenging” themself, not anyone else. As an instructor, it is my job to create an environment where this kind of self-challenge occurs naturally. So, I draw on my experience in the areas I teach to provide the following:
- enticing self-challenging mechanisms (assignments)
- tools for self-evaluating learning (metacognitive surveys)
- a practice-feedback loop to stimulate the self-challenging (feedback)
In this way, I am a proctor or guide in the students’ learning experience, and students are encouraged to use me as a resource.
Of course, every student is different, and their diverse learning styles mean that students grasp concepts in various ways and at varying speeds. Another part of teaching is nurturing this diversity, being flexible, and encouraging creativity in the classroom; when students are allowed to be creative and explore ideas in ways that align with their natural way of being, they are more engaged and confident learners. When I do my job correctly, students embrace the fact that their way of understanding and learning can and will be different from their peers’, and that it can still be perfectly valid. Adapting in this way fosters a healthy, inclusive educational environment.
Finally, it is worth mentioning that I make it a point to humble myself as an instructor. I do my best to show students my excitement and passion for the material (I hope it rubs off on them), but I also hope students see in me another life-long learner who just happens to have a bit more experience with the topic at hand. In short, there must be a mutual respect between teacher and student, and it has to flow in both directions.
goals for students #
I teach in our graduate data science program with the overall goal of developing students who “have the know-how to make effective decisions based on data”. To this end, I focus on equipping students with
- the theoretical knowledge needed to explain their decisions (be it math, statistics, coding, etc.), and
- the technical proficiency needed to gather, manipulate, and interpret data.
Most importantly though, I aim to do this in such a way that students cultivate a personal appreciation for curiosity and skepticism. Too often, students treat data science as a purely objective discipline, assuming that if a model performs well, it must be valid. This leads to a tendency to offload critical thinking to (and sometimes over-rely on) possibly flawed performance metrics, bypassing a deeper understanding of what their models are actually doing. To combat this, I challenge students to embrace the subjectivity of data science. Students should understand that the models they create are tools, one option among many rather than ultimate truths, and that context should always be taken into consideration.
Lastly, I want students to carry a mindset that challenges the perceived authority of models. In professional settings, they will often be surrounded by peers or stakeholders who may not question results as rigorously. In these cases, I hope to give students the confidence to recognize they are the expert. They should feel empowered to question assumptions, challenge modeling decisions, and advocate for more thoughtful applications of data science. Ultimately, I want them to be not just skilled technicians but reflective data practitioners who understand their responsibility in shaping how data-driven decisions are made.
implementation and assessment #
Since I believe learning (and, in a way, teaching) happens almost exclusively at the behest of the learner, I believe evaluation should follow suit. So, I choose not to use “grading” in the traditional way. For the most part, “grading” is completely replaced with “feedback”, and students are given the power to self-evaluate. In my courses, I employ flipped classrooms, which allow students to ingest course materials at their own pace; Ungrading (Blum, 2020), which prioritizes iterative feedback and continual growth and improvement; and metacognitive reflection, which trains students to assess and improve their personal learning process. Thus, instead of scores and grades, we have a framework where students are given multiple attempts to improve their work based on feedback from the instructor.
In the classroom, traditional “lecture” typically takes up the least amount of time. Using a flipped classroom model, students are expected to work through weekly readings and videos on their own time before showing up to class. This gives us time for various learning activities depending on the class, such as peer instruction (Mazur, 1997) or in-class project group work. In classes where projects are more stimulating for students, I find that groups of 3-4 work just fine; otherwise, I tend to stick to pairs. Not only does this improve student engagement with the course materials, it gives students the opportunity to interact with, teach, and learn from one another.
Every week, or after each assignment, students fill out a metacognitive report in which they share what they’ve learned, how they’ve learned it, and what they’ve gotten wrong (and why). These help students improve their learning strategies, but they are also great qualitative assessments of how I’m doing as an instructor; they help me understand whether the material is helping, and why (or why not). For example, a student pointed out one week that no aspect of the lecture, reading, or lab had helped them understand a particular topic in the homework assignment. So, I reached out to them and created a new document to fill the gap. Though this was a successful intervention, it revealed an issue with the course content that needed to be improved.
I give students complete control over their final grades (within reason) through summative self-assessment (Nieminen, 2022). For this to work, I have students fill out three self-reflections (a la Stommel, 2020) at the beginning, middle, and end of the semester. These are notably more extensive than the weekly reports, and they serve a few primary purposes:
- In the Initial Self-Reflection, students describe in detail what it means to be an A-student, a B-student, etc., in the context of the class. At the end of the term, they evaluate their work, and make a determination based on these definitions. This initial self-reflection is also meant for students to document their prior understanding of the topics presented in the class, so they can see how they’ve improved at the end.
- As an example, I have already had a student ask for an “A” even though they neglected to turn in several assignments. Their initial self-reflection described a much more diligent “A-student”, so it was clear, by their own definition, that they could not give themselves an A!
- At the Midterm Self-Reflection, I ask students to evaluate their learning thus far, and share what about the course has been especially exciting or challenging for them. I summarize the results of the survey with the class. These reports can be helpful for students to “place” themselves within the class, giving them a sort of comparative measurement for how they’re doing so far.
- Depending on their response to the midterm self-reflection survey, I may require a one-on-one meeting with me to readjust course expectations for the remainder of the semester.
- Finally, in the Final Self-Reflection, I hear from students about their learning experience in the class and how their work lives up to the grade definitions they gave at the beginning of the term. Not only does this reflection determine their final grade, it also gives students ownership of the grade they earned.
- I also use this as an opportunity to have students give advice to the next cohort of students. This helps me learn what I can improve, but it also gives the next group of students useful advice from a true peer!
To measure student performance on assignments, I use an adaptation of the EMRF rubric (Stutzman & Race, 2004; subsequently Talbert, 2021) and employ a version of specifications grading (Nilson & Stanny, 2023) that deems submissions complete or incomplete, with the opportunity to make adjustments based on feedback and resubmit. This is a formative assessment mechanism that students can use to correct themselves week-to-week. In addition, students complete a self-evaluation of their work and compare it to the one given by a TA or instructor, with the aim of validating their learning. For course projects, these self-evaluations are also compared to peer evaluations (Yan & Carless, 2022). When I share the rubric, I also share the following statement: “You make the changes you want to — we’re just here to give our advice, and to help where we can. After all, you help decide your grade at the end of the term.” In this way, students see that they will receive feedback on their work, but that they may take our input as they see fit; this should support their intrinsic motivation for learning. It should also nurture their ability to self-evaluate as life-long learners.
All assignments and projects I give students involve an aspect of “explaining in your own words”. This requires students to channel their inner experience in their work, and it challenges them to understand the material well enough to do so. To help this along, any time a project involves a dataset, students are expected to choose a dataset (one that meets certain requirements) that they are passionate about. I express to students that their passion is what will drive them to work harder and learn more effectively.
Lastly, when students are struggling (in such a way that they might not be able to catch up), I meet with them individually and allow them to speak freely. In my experience, the vast majority of these cases arise when a student is having personal issues outside of the class. Once we identify the issue, I might refer them to resources on campus, or discuss alternative ways for them to complete the class based on their situation. In the most extreme cases, I will assign the student an Incomplete grade with time to finish their missing work after the semester is through (i.e., more time, less stress).
ongoing progress #
Since my first semester at Indiana University – Indianapolis, my teaching has seen a number of significant improvements. To name a few:
- More even (and reasonable) course grade distributions
- Better implementation of research-backed pedagogical techniques
- Improved course materials that allow for a flipped classroom model
- (For my math class) practical applications of mathematics that students can interact with each week
- “Grading” policies that better align with empowering students
That said, I recognize that there are aspects of my instruction that could use some improvement.
- Right now, the best evidence I have of my instruction comes from the creative products that students contribute to their assignments. To provide a more quantitative measure of student learning I will …
  - incorporate a red-yellow-green indicator in the weekly metacognitive reports to measure student comfort with the material. Watching this over time should indicate how well students are adjusting to the concepts they’re learning (a sketch of what this tracking might look like follows this list).
  - build a small pre/post “exam” for students to complete before and after courses. This should give me a gauge for what is working, and what is not.
- In general, weekly assignments are submitted separately from the weekly metacognitive report. To improve these week-to-week self-evaluations, I’ll combine each assignment and its metacognitive report into a single “assignment” in Canvas (our LMS). This way, each resubmission will be more explicitly tied to a report response.
- As a very long-term goal, I’d like to overhaul most of my course assignments to better integrate AI models. For example, I might ask students to “Use an LLM to <do something>. Share your prompt, what model you used, and improve on the result. Explain what you improved and why.” (this particular idea was inspired by Bowen & Watson, 2024).
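To make the red-yellow-green idea above more concrete, here is a minimal sketch of how those weekly comfort ratings could be tracked across a semester. It assumes a hypothetical CSV export of the metacognitive reports with `week`, `student_id`, and `comfort` columns; the file and column names are illustrative, not part of any existing course setup.

```python
import pandas as pd

# Hypothetical export of the weekly metacognitive reports: one row per student per week,
# with a "comfort" column holding "red", "yellow", or "green".
reports = pd.read_csv("metacognitive_reports.csv")  # columns: week, student_id, comfort

# Share of each comfort level per week, to watch how the class adjusts over time.
comfort_trend = (
    reports.groupby("week")["comfort"]
    .value_counts(normalize=True)   # proportion of red/yellow/green within each week
    .unstack(fill_value=0)          # weeks as rows, colors as columns
    .reindex(columns=["red", "yellow", "green"], fill_value=0)
)

print(comfort_trend.round(2))  # a rising "green" share suggests students are settling in
```

Even something this simple would provide a quantitative signal to set alongside the qualitative comments in the reports.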
a note on trust #
AI is here.1 And, with it, comes the opportunity for students to offload their coursework to models which can “do it for them”. This threatens their ability to learn while in school, and I can see why many educators take it upon themselves to maintain a strict grasp on how students use AI (if at all), hoping to ensure a class of students who are learning more genuinely. The first issue with this attitude is that it takes away a student’s freedom to use these models in a novel way that allows them, specifically, to learn most effectively. That is, if I say “you can only use AI <like this>”, but the best way for student X to learn turns out to be using it <like that>, then I have done them a disservice by depriving them of a valuable learning tool. The second issue is that it presupposes students will act dishonestly, without giving them the opportunity to prove otherwise. This violates the mutual respect that educators owe their students.
I, personally, will always give the student the benefit of any doubt. Maybe it’s just my personality, but I typically don’t care to investigate whether a student is lying to me, about anything; I just care about what they show me, and how I can help. If something seems fishy, there is likely a good reason, and I will approach the student with this mentality when I ask about it. My thinking is that I cannot take it upon myself both to teach ethical behavior and to teach the subject at hand; I can only do my best with the latter and, with the time I have, be a good example and allow for some risk and flexibility with the former.
references #
Augustyniak, R. A., Ables, A. Z., Guilford, P., Lujan, H. L., Cortright, R. N., & DiCarlo, S. E. (2016). Intrinsic motivation: An overlooked component for student success. Advances in Physiology Education, 40(4), 465–466. https://doi.org/10.1152/advan.00072.2016
Blum, S. D. (2020). Ungrading: Why rating students undermines learning (and what to do instead). West Virginia University Press.
Bowen, J. A., & Watson, C. E. (2024). Teaching with AI: A practical guide to a new era of human learning. Johns Hopkins University Press.
Dinsmore, D. L., & Alexander, P. A. (2012). A Critical Discussion of Deep and Surface Processing: What It Means, How It Is Measured, the Role of Context, and Model Specification. Educational Psychology Review, 24(4), 499–567. https://doi.org/10.1007/s10648-012-9198-7
Mazur, E. (1997). Peer instruction: A user’s manual. Prentice Hall.
Nieminen, J. H. (2022). Disrupting the power relations of grading in higher education through summative self-assessment. Teaching in Higher Education, 27(7), 892–907. https://doi.org/10.1080/13562517.2020.1753687
Nilson, L. B., & Stanny, C. J. (2023). Specifications grading: Restoring rigor, motivating students, and saving faculty time (2nd ed.). Routledge. https://doi.org/10.4324/9781003447061
Stommel, J. (2020, February 6). Ungrading: An FAQ. Jesse Stommel. https://www.jessestommel.com/ungrading-an-faq/
Stutzman, R. Y., & Race, K. H. (2004). EMRF: Everyday Rubric Grading. The Mathematics Teacher, 97(1), 34–39. https://doi.org/10.5951/MT.97.1.0034
Talbert, R. (2021, January 11). EMRN: A framework for giving feedback on student work. Robert Talbert, Ph.D. https://rtalbert.org/emrn/
Yan, Z., & Carless, D. (2022). Self-assessment is about more than self: The enabling role of feedback literacy. Assessment & Evaluation in Higher Education, 47(7), 1116–1128. https://doi.org/10.1080/02602938.2021.2001431
Updated on April 4th, 2025.
Until recently, I’ve eschewed the term “Artificial Intelligence” or “AI” to describe extant models such as Large Language Models (LLMs), Generative Pre-trained Transformers (GPTs), and other generative models, and in a way, I still do. I believed the term reflected an unrealistic, ethereal ideal that exists only in the media and in science fiction. In truth, these models are nothing more than applied statistics and linear algebra on a colossal scale. However, the more I reflect on the matter, the more I realize that our society has chosen this sort of modeling as the defining basis for what we know of as “AI”. So, though I may have held out hope for something better, alas, this is where we stand.