From Gut Instinct to Data Points: Translating Classroom Assessment Wisdom for the Digital Learning Age
- David Schachter
- Aug 19
- 6 min read
Throughout my teaching career, I developed what might be called ‘intuitive learning analytics’ — the ability to recognize patterns in student work, behavior, and engagement that revealed deeper truths about their learning progress than any single test score could capture. When I noticed a student’s writing becoming more tentative after attempting a challenging concept, or recognized the subtle shift in sentence structure that indicated growing confidence, I was essentially conducting qualitative data analysis in real-time.
Even more fascinating was my growing ability to sense the deeper metacognitive patterns at work. In my final years of teaching, I could detect when a student was taking endless time not because they lacked understanding, but because their concept of perfection was paralyzing their progress. These students needed to learn to trust the iterative process rather than pursue an impossible first-draft perfection.
This intuitive assessment led to one of my most profound discoveries about learning patterns. I began recognizing how students approached writing through the lens of their extracurricular passions. One young man, who initially produced abysmal drafts but showed steady improvement through iterations, revealed that his grandfather had taught him to carve wooden figures from blocks of wood. When I asked him to describe his carving process, I realized he was applying that same methodical, iterative approach to writing — slowly revealing the form within the material through careful, patient work. Once I realized this, I could picture how each successive draft carved away the excess and smoothed out the rough parts.
Similarly, I had a young woman who was a violinist, practicing alone in her room. Her writing process mirrored her musical practice: quiet, reflective, building complexity through repetition and refinement. Both students had been labeled poor writers for ten years because previous teachers misunderstood their non-standard, prolonged processes, prizing speed, immediate results, and what they considered the ‘normal’ process over recognizing different pathways to mastery.
These experiences revealed something crucial about assessment: human intuition, combined with metacognitive awareness, still plays an irreplaceable role that algorithms simply cannot account for. Modern learning analytics platforms now attempt to formalize and scale this pattern recognition through systematic data collection and algorithmic analysis. Where I once relied on my trained eye to spot emerging confidence in a student’s willingness to attempt complex arguments, systems now track revision patterns, time-on-task metrics, and linguistic complexity indicators to identify similar growth signals.
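To make that concrete, here is a minimal sketch, in Python, of the kind of signals such a platform might compute from a series of drafts. The DraftSnapshot structure, the growth_signals function, and the average-sentence-length proxy for linguistic complexity are my own illustrative assumptions, not the API of any actual analytics product.

```python
from dataclasses import dataclass
from statistics import mean


@dataclass
class DraftSnapshot:
    """One saved revision of a student's piece, plus time spent on it."""
    text: str
    minutes_spent: float


def avg_sentence_length(text: str) -> float:
    # Crude linguistic-complexity proxy: mean words per sentence.
    sentences = [s for s in text.replace("!", ".").replace("?", ".").split(".") if s.strip()]
    return mean(len(s.split()) for s in sentences) if sentences else 0.0


def growth_signals(drafts: list[DraftSnapshot]) -> dict:
    # Reduce a revision history to the metrics named above:
    # revision count, time on task, and a complexity trend.
    complexities = [avg_sentence_length(d.text) for d in drafts]
    return {
        "revision_count": len(drafts),
        "total_minutes": round(sum(d.minutes_spent for d in drafts), 1),
        "complexity_trend": round(complexities[-1] - complexities[0], 2) if len(complexities) > 1 else 0.0,
    }


print(growth_signals([
    DraftSnapshot("It was good. I liked it.", 20),
    DraftSnapshot("The essay slowly took shape. Each pass cut away what did not belong.", 35),
]))
```

The arithmetic here is trivial; as the next paragraph argues, the hard part is deciding what those numbers actually mean.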
Yet these systems face significant interpretive challenges. They might detect that a student is spending more time on assignments, but they struggle to discern whether this represents increased engagement, a unique information-processing style, or growing confusion. They can measure linguistic complexity, but they can’t recognize when a student’s shift toward simpler language actually represents clearer thinking rather than regression. Are students trying to be concise but sacrificing precision, or do they fundamentally misunderstand the assignment?
I recall one student who wrote a brilliantly crafted essay that completely missed the assignment’s essential question. This student needed conceptual coaching, not remediation on writing methods or processes. Would an algorithm recognize this distinction, or would it simply signal failure and send the student down an inappropriate remedial path?
The most effective approach I developed balanced quantitative measurement with rich qualitative feedback. I would give students grades based on clear rubrics — largely quantitative assessments — but pair these with deep, personalized feedback. If a student wrote something that took me several seconds to decode, I might respond: “I wonder if saying it like this or this would get your point across more effectively. I understand what you’re trying to say, but it took me a bit to figure it out. This disrupted my flow and I struggled to reconnect with your other ideas.”
Interestingly, this process mirrors how my AI writing partner works with me now. When I ask whether this reflects an algorithm understanding my thinking through prolonged use, or simply represents a logical approach to teaching, it acknowledges uncertainty — it could be either or both. This uncertainty points to something important: the most effective assessment approaches, whether human or algorithmic, create space for exploration rather than rushing to definitive conclusions.
My most accurate assessments always integrated multiple data sources — not just written work, but participation patterns, peer interactions, willingness to take risks, and responses to challenges. I was conducting what we might now call ‘multi-modal learning analytics,’ but with crucial human interpretation that gave meaning to the data patterns. Modern systems attempt similar integration through comprehensive dashboards, but they often struggle to weight different data types appropriately or understand the cultural and personal contexts that transform raw patterns into meaningful insights.
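As a deliberately oversimplified sketch of why that weighting is hard, here is one way a dashboard might collapse several data sources into a single score. Every signal name and every weight below is invented for illustration; the point is that choosing the weights is a pedagogical judgment, not a technical one.

```python
# Hypothetical multi-modal signals for one student, each normalized to 0..1.
signals = {
    "written_work_quality": 0.62,
    "participation": 0.80,
    "peer_interaction": 0.55,
    "risk_taking": 0.40,
    "response_to_challenge": 0.70,
}

# The weights encode exactly the judgment calls that human context usually
# supplies; no single weighting fits every student or cultural context.
weights = {
    "written_work_quality": 0.35,
    "participation": 0.15,
    "peer_interaction": 0.15,
    "risk_taking": 0.15,
    "response_to_challenge": 0.20,
}

composite = sum(signals[key] * weights[key] for key in signals)
print(f"Composite readiness score: {composite:.2f}")
```

Shift the weights and the same student looks like a different learner, which is precisely the contextual problem described above.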
This challenge extends beyond technical limitations to fundamental questions about learning and growth. My iterative approach to teaching writing centered on ongoing formative assessment that helped students embrace revision as part of mastery rather than evidence of failure. The formative process creates growth; summative assessment measures its level. This principle applies equally in professional settings — whether developing oneself as an employee, managing others, or designing educational materials. Having employees and managers view learning as an iterative series of formative steps leading to mastery maintains engagement and promotes a growth mindset rather than the fixed mindset that measurement-focused approaches often reinforce.
This distinction between growth and fixed mindset, pioneered by Carol Dweck over the past forty years, reveals why human-led approaches to learning remain vital. Students with low self-perception in English often interpreted poor grades as confirmation that they simply couldn’t improve — which is precisely when human intervention becomes crucial. Sometimes the real lesson involves helping students adjust their mindset before learning can happen, a nuanced intervention that requires emotional intelligence and cultural awareness that current AI systems struggle to provide.
One reason I find it challenging to unpack, recognize, and articulate these concepts now that I’m out of the classroom relates to the scope and pace of teaching. Educators rarely have the space and time to discuss these practices, understand their own metacognitive processes, or learn collaboratively from one another. Despite mandatory planning time, most teachers feel overwhelmed by student needs, administrative paperwork, coverage duties, and endless additional responsibilities. Keeping these insights locked in private intuition becomes the norm rather than the exception.
However, this challenge also points toward tremendous potential for AI partnership in education. If educators, managers, and instructional designers can integrate AI learning tools thoughtfully, these systems can become partners that streamline routine tasks, automate time-consuming processes, and create space for us to focus on the uniquely human elements of teaching and learning.
The key lies in approaching AI interactions as opportunities for mutual exploration, maintaining critical thinking while remaining open to possibilities, and focusing on augmenting rather than replacing human expertise. Just as effective educators use Socratic questioning to draw out student knowledge and connections through iterative assessment, AI partnerships can help educators articulate and systematize the brilliant intuitive practices we’ve developed through experience.
My approach to assessment was fundamentally diagnostic — I was less interested in assigning grades than in understanding what each student needed to grow. When reviewing essays, I was conducting qualitative data analysis, looking for patterns that revealed conceptual misunderstandings, skill gaps, or readiness for new challenges. Modern learning analytics formalize this diagnostic approach through systematic data collection and pattern analysis, but they often miss the contextual clues that informed my most valuable assessments.
This is where partnership becomes powerful. Every assessment I conducted was designed to answer specific questions about my teaching and my students’ learning. Patterns across multiple students informed whole-class instruction decisions; individual patterns guided personalized interventions. Modern learning analytics formalize this decision-making process through automated recommendation systems, but they often lack the pedagogical content knowledge that helped me translate assessment data into effective instructional strategies.
Yet through thoughtful collaboration, AI systems can help educators become even more effective. Rather than replacing human judgment, these tools can help us recognize patterns we might miss, suggest interventions we hadn’t considered, and document approaches that work so they can be refined and shared. The goal isn’t to automate teaching but to create space for the deeply human work of connection, inspiration, and growth that transforms learning from measurement into genuine development.
The most effective future of educational assessment will likely combine the pattern recognition capabilities of learning analytics with the contextual interpretation skills of experienced educators. This partnership approach recognizes that both human intuition and algorithmic analysis bring unique strengths to understanding learning, and that their combination can create assessment systems that are both systematic and deeply responsive to human complexity.
As I transition from classroom teaching to instructional design, I’m learning to recalibrate these principles for adult learners while maintaining the core insight that assessment should drive growth rather than simply measure performance. Whether working with employees, managers, or designing educational materials, the fundamental truth remains: effective learning happens through iterative processes that balance structure with personalization, measurement with meaning, and technological capability with human wisdom.