Humanizing adaptive learning for ELT

March 17th, 2014

Part 1:  Knewton, adaptive learning, and ELT
Part 2:  Open platforms and teacher-driven adaptive learning

The debate over adaptive learning at eltjam, on Philip Kerr’s blog, and at Nicola Prentis’ Simple English has been both fascinating and instructive, not only because of the posts themselves but also because of the great dialogue in the comments. It’s a fun topic because it involves our core beliefs regarding language acquisition, effective teaching, and the roles that technology can play.

That said, I can’t help but feel that in some respects we’re thinking about adaptive learning in a limited way, and that this limited perspective, combined with confusion about what Knewton actually does, is distorting how we approach the topic and making it into a bigger deal than it really is. But, given the potential power that the new “adaptive learning” technology may indeed have, we do need to see clearly how it can help our teaching, and where it can potentially go wrong.

Adaptive learning in context
I wrote “adaptive learning” in scare quotes above because I think the name itself is misleading. First, in a very important way, all learning is adaptive learning, so the phrase itself is redundant.  Second, the learning, which is carried out by the learner, is not what the vendors provide: “Knewton…constantly mines student performance data, responding in real time to a student’s activity on the system. Upon completion of a given activity, the system directs the student to the next activity.” That is not adaptive learning, but rather adaptive content; it is the content sequence (of “activities”) that adapts to the learner’s past performance. We can call adaptive content “micro-adaptation”, since it happens at a very granular level.
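
To make the distinction concrete, here is a minimal sketch in Python of adaptive content (micro-adaptation): the next activity is chosen from the learner’s performance on the current one. The activity names, thresholds, and routing rule are all invented for illustration; they do not describe Knewton’s actual algorithms.

# A deliberately naive sketch of adaptive content (micro-adaptation).
# Activity names, thresholds, and the routing rule are invented; they do
# not describe Knewton's actual algorithms.

ACTIVITIES = {
    "past_simple_gapfill_1": {"easier": "past_simple_presentation",
                              "harder": "past_simple_gapfill_2"},
    "past_simple_gapfill_2": {"easier": "past_simple_gapfill_1",
                              "harder": "past_simple_freer_practice"},
}

def next_activity(current, score):
    """Pick the next activity from the learner's score (0-1) on the current one."""
    routes = ACTIVITIES.get(current, {})
    if score < 0.5:                      # struggled: step back to something easier
        return routes.get("easier", current)
    if score > 0.8:                      # comfortable: move on to something harder
        return routes.get("harder", current)
    return current                       # middling: more practice at the same level

print(next_activity("past_simple_gapfill_1", 0.9))   # -> past_simple_gapfill_2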

Now, we teachers have been adapting to our students for how long… millennia? We assign homework based on what we know of our students’ strengths and weaknesses (adaptive content). In the communicative classroom, we are always adjusting our pacing, creating new activities, or supporting spontaneous discussion based on our perception of the students’ needs in that moment (the adaptive classroom). Dogme is one kind of adaptive learning in the classroom. And, when the stars align, educators can successfully design and deploy a curriculum, including methods and approaches, that iteratively adapts to student needs over time (the adaptive curriculum). We can call the adaptive curriculum “macro-adaptation”.

So how does the new, algorithmic adaptive learning, of the kind Knewton helps deliver, address each of these categories?

+ As we saw above, the content level is where Knewton focuses, and it’s limited to task-level online content that can be objectively scored (micro-adaptation). But it can do amazing things with this limited data, especially when the data is aggregated (“big data”). Knewton can change the activity sequence in real time to better fit the student’s performance, and can then make statistical inferences about the quality of specific activities and sequences of activities (a rough sketch of this kind of aggregation follows this list).

+ For the classroom, students would need tablets or smartphones in order to input the data that Knewton needs. I can think of some very cool pairwork and groupwork tasks involving tablet-based activities, but these aren’t individualized and so would be out of Knewton’s scope. Presumably the student data can only be created by individual tasks, which would severely limit its utility in a communicative classroom. However, the content-level input resulting from student online work (e.g. homework, or a blended course) could be valuable for teachers to have and could help optimize classroom lesson planning.

+ For the curriculum category, algorithmic adaptive learning can analyze the student performance data resulting from the content level, and then deliver insights that can potentially be fed into the curriculum, helping certain aspects of the curriculum iterate and adapt over time (there are limitations here, discussed below).
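
As for the aggregation mentioned in the first point above, here is a rough sketch of the kind of statistical inference involved, assuming nothing about Knewton’s real models: each activity’s “quality” is estimated simply as the average score gain, across many students, between a task attempted before the activity and one attempted after it. The records and the measure itself are invented for illustration.

from collections import defaultdict
from statistics import mean

# Invented records: (student_id, activity_id, score_before, score_after).
# The "quality" estimate below (mean score gain across students) is a toy
# stand-in for the inferences a real adaptive platform would make.
records = [
    ("s1", "gapfill_1", 0.40, 0.70),
    ("s2", "gapfill_1", 0.55, 0.60),
    ("s1", "dictation_3", 0.50, 0.45),
    ("s3", "dictation_3", 0.65, 0.60),
]

def activity_gains(rows):
    gains = defaultdict(list)
    for _, activity, before, after in rows:
        gains[activity].append(after - before)
    return {activity: round(mean(g), 3) for activity, g in gains.items()}

print(activity_gains(records))
# -> {'gapfill_1': 0.175, 'dictation_3': -0.05}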

So as a tool, Knewton has potential for the ELT profession. But whether the tool is used appropriately or not does not depend on Knewton, but rather on the publishing partners that use Knewton’s tools. All Knewton does is provide publisher LMSs with a hook into the Knewton data infrastructure. Knewton is a utility. It’s the publishers that decide how best to design courses that use Knewton in a way that is pedagogically appropriate and leverages Knewton’s strengths to provide adaptive content, classrooms, and curricula. It is the publishers that must understand Knewton’s limitations. As a tool, it can’t do everything – it can’t “take over” language learning, or relegate teachers to obsolescence, although Knewton’s marketing hyperbole might make one think that.

Limitations for ELT

If Knewton’s ambition is one concern, then another is that it is not specifically designed for ELT and SLA, and therefore may not understand its own limitations. Knewton asks its publishing partners to organize their courses into a “knowledge graph” where content is mapped to an analyzable form that consists of the smallest meaningful chunks (called “concepts”), organized as prerequisites to specific learning goals. You can see here the influence of general learning theory rather than SLA/ELT, but let’s not concern ourselves with nomenclature and just call their “knowledge graph” an “acquisition graph”, and call “concepts” anything else at all, say… “items”. Basically, our acquisition graph could be something like the CEFR, and the items are the specifications in a completed English Profile project that detail the grammar, lexis, and functions necessary for each of the can-do’s in the CEFR. Now, even though this is a somewhat plausible scenario, it opens Knewton up to several objections, foremost among them the degree of granularity and linearity it assumes.
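
Before turning to those objections, here is a purely hypothetical sketch of what such an acquisition graph might look like in data terms: items with prerequisite links feeding a CEFR-style can-do goal. The item names, the structure, and the mastery check are invented for illustration; they are not Knewton’s actual data model or anything from the English Profile project.

# A hypothetical "acquisition graph": items with prerequisites, feeding a
# CEFR-style can-do goal. The item names and links are invented for
# illustration; this is not Knewton's data model or English Profile content.

GRAPH = {
    "goal:A2_can_describe_past_events": {
        "prerequisites": ["item:past_simple_regular",
                          "item:past_simple_irregular",
                          "item:time_expressions_ago_last"],
    },
    "item:past_simple_irregular": {
        "prerequisites": ["item:past_simple_regular"],
    },
    "item:past_simple_regular": {"prerequisites": []},
    "item:time_expressions_ago_last": {"prerequisites": []},
}

def unmet_prerequisites(node, mastered):
    """Return the prerequisites of `node` that the learner has not yet mastered."""
    return [p for p in GRAPH.get(node, {}).get("prerequisites", [])
            if p not in mastered]

print(unmet_prerequisites("goal:A2_can_describe_past_events",
                          {"item:past_simple_regular"}))
# -> ['item:past_simple_irregular', 'item:time_expressions_ago_last']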

A common criticism of Knewton is that language and language teaching cannot be “broken down into atomised parts”, but can only be looked at as a complex, dynamic system. This touches on the grammar McNugget argument, and I’m sympathetic. But the reality is that language can indeed be broken down into parts, and that this analytic simplification is essential to teaching it. Language should not be taught only this way, and of course we need to always emphasize the communicative whole rather than the parts, and use meaning to teach form. But to invalidate Knewton because it uses its algorithms on discrete activities is to misunderstand the problem. Discrete activities are essential, in their place. The real problem Knewton faces in ELT is that both the real-time activity sequencing and the big-data insights derived from those activity sequences are less valuable than in other domains, and could be misleading.

They are less valuable in ELT for at least two reasons. First, the big-data insights come from a limited subset of activities, so much of the language and learning data students produce is never captured.

Second, language learning is less linear than other, more general learning domains (e.g. biology, maths). Unlike in those domains, most language students are exposed to ungraded, authentic, acquirable language (from their teacher, from the media, etc.) that represents, more or less, the entirety of what is to be learned. Algebra students are not exposed to advanced calculus on an ongoing basis, and if an algebra student were exposed to calculus, the student wouldn’t be able to “acquire” calculus the way humans can acquire language. Therefore, for ELT, the cause-and-effect relationship between Knewton’s acquisition graph and its map of prerequisite items is to some extent invalidated, because the student may have acquired a prerequisite item by, say, listening to a song in English the night before, not by completing a sequence of items. That won’t happen in algebra.
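
To put the same point in (again hypothetical) code terms: a sequence-only model infers acquisition from completed work, but in language learning the evidence of acquisition can arrive from outside the system entirely. The item names and the two toy functions below are mine, not Knewton’s.

# A toy illustration, not Knewton's logic: in language learning, "completed
# the prerequisite sequence" and "has acquired the item" can come apart,
# because acquisition evidence can arrive from outside the system.

prereqs_completed = {"past_simple_regular"}      # what the system has logged
direct_evidence = {"past_simple_irregular"}      # e.g. picked up from a song, used correctly

def system_believes_acquired(item):
    # Sequence-only model: acquisition is inferred purely from completed work.
    return item in prereqs_completed

def actually_acquired(item):
    return item in prereqs_completed or item in direct_evidence

item = "past_simple_irregular"
print(system_believes_acquired(item), actually_acquired(item))
# -> False True: the model would route the learner back through material
#    they have already picked up elsewhere.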

Because of these limitations, Knewton will need to adapt its model considerably if it is to reach its potential in the ELT field. It has a good team with some talented ELT professionals on it, who are already qualifying some of the stock Knewton phraseology (viz. Knewton’s Sally Searby emphasising, in the last Knewton response on eltjam, that for ELT knowledge graphing needs to be “much more nuanced”). And hopefully Knewton’s publishing partners will design courses with acquisition graphs that align with pedagogic reality and recognize these inherent limitations.

Meanwhile, as Knewton works to overcome these limitations, where can it best add value to publisher courses? I would guess that some useful courses can be published straightaway for certain self-study material, some online homework, and exam prep – anywhere the language is fairly well defined and the content more amenable to algorithmic micro-adaptation. Then we can see how it goes.

In Part 2 of this post we’ll focus on the primary purpose of adaptive learning, personalization, and how this can be achieved by Knewton as well as by teacher-driven adaptive learning using open platforms.  As Nicola points out, we need to start with “why”….
