Here are some new features which have recently been released to help you develop and deliver your blended learning courses.
- You can create tests by enabling or disabling the “clear answers” function – this means learners can only submit their work once, so the activity you create can serve as a test. Note that you can choose this setting at page or course level: simply select “Allow” or “Do not allow” multiple attempts in the course settings or when you publish a page. Do you allow learners to resubmit work or do an exercise again? What do you think is the best balance for self-study activities?
- You can now hide a folder if you are developing learning material within it and you are not ready to use the tasks or exercises with learners. If a folder contains only draft pages (i.e. there are no published items) then the folder will not show up for your learners – learners can happily get on with the activities that you have published while you prepare the upcoming tasks in private. Note: any co-moderators of the course will still be able to view the folder. Do you find that most of your courses are designed as you go along, to allow for a more flexible training program?
- The “open essay” item now has a toolbar! The rich text editor allows learners to add colour and different font types. Learners can also highlight words, or add an audio or image file from their hard disk. They can also add hyperlinks or videos, making essay submissions much richer and appealing to a wider range of learning styles.
We have made many other small enhancements to make your experience smoother, and we will be rolling out some additional features soon. We hope you are enjoying the platform.
Joe McVeigh’s Intro to TESOL course put together a great slang dictionary last month. Slang is a bit like IT vocab – some of it gets obsolete quickly – so this is a nice example of what’s current in US universities. It also has audio examples. Follow the links off the post.
And in case you haven’t checked out Joe’s site, there are some great resources there – he’s the real deal – including some nice needs assessment stuff.
Several weeks ago we were discussing testing and evaluation (me, Marco Polo 1, 2, and Aaron at Teacher in Development). Basically the discussion revolved around whether it was evaluation itself or poorly-implemented evaluation that smothers learning, and I think that, after much writing and explaining, we all realized that basically we all agreed (kind of).
Evaluation is part of the teaching and learning process. A good grade is certainly desirable, but if our teaching/learning processes have been well thought out, learners who are competent should know they will do well. By the time a learner has finished a course, she should know where she is in terms of grades. As an instructor, I should provide continual feedback against which a learner can sharpen and measure his/her own thinking. The evaluation outcome should not be a surprise to the learner. Unfortunately, we make the grades the focus (instead of the learning), and our learners think that the reason they are taking our courses is to get a certain grade. In reality, the focus of evaluation is to ensure that a learner has a framework upon which she/he can build and function within a field or within society as a whole. The grade, while mandated, is really one of the least valuable parts of the entire learning process.
I like how throughout the post Siemens navigates (carefully) between the need for feedback and where its limits lie, and the opportunities for learner-defined paths and measuring pre-established outcomes. I think this is exactly the tightrope we have to find our balance on (although maybe “the” answer isn’t found on a dualistic tightrope but rather in a third way). How to actually implement this is the key of course. I would love to see some action research or case studies that exemplify the concepts that are relatively easy to blog about but really tough to put into practice, especially in institutions. If anyone can steer me towards some linkage it’d be great.
So I’m feeling guilty for browsing through BoingBoing in the middle of the work day and, lo and behold, they provide a wonderful rationalization: an interesting article on some research that purports to show that repeated testing improves learning (well, memory).
Reading through the article, it makes intuitive sense: frequent feedback on what is retained improves retention. It also fits my personal learning style. And it fits with what I do as a language teacher: “tests” that are so frequent that they cease to be thought of as tests. This gives everyone great feedback and improves performance as well as metacognitive practice.
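To make the idea of “tests so frequent that they cease to be thought of as tests” concrete, here’s one common way to operationalize frequent low-stakes retrieval: a Leitner-style review queue, where items answered correctly are reviewed less often and misses come back right away. This is only an illustrative sketch, not anything described in the article or the post – the item names and intervals are invented.

```python
# A minimal Leitner-style review scheduler (illustrative; all data invented).
# Correct answers promote an item to a less-frequent box; a miss sends it
# back to box 0, so weak items get tested most often.

REVIEW_EVERY = [1, 2, 4]  # box i is due every REVIEW_EVERY[i] sessions

def due_items(cards, session):
    """Return the items scheduled for review in the given session number."""
    return [item for item, box in cards.items() if session % REVIEW_EVERY[box] == 0]

def record_answer(cards, item, correct):
    """Promote on a correct answer; reset to box 0 on a miss."""
    if correct:
        cards[item] = min(cards[item] + 1, len(REVIEW_EVERY) - 1)
    else:
        cards[item] = 0

# Hypothetical vocabulary items, all starting in box 0 (reviewed every session).
cards = {"ubiquitous": 0, "mitigate": 0, "salient": 0}
record_answer(cards, "ubiquitous", True)   # promoted to box 1
record_answer(cards, "mitigate", False)    # stays in box 0
```

The point of the sketch is the feedback loop: every session is a tiny test, and the schedule itself is the “grade” – the learner sees which items keep coming back.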
But…of course testing and measurement are pariahs in the edutech community now. I don’t work in the institutionalized public sector, so I can’t say how bad things are, testing-wise, in the school systems, but I assume it’s horrible and that explains why everyone is so against testing and measurement.
But I can’t help feeling that anti-test bias is confusing goals and means. It’s the way testing is carried out that is the problem, not testing itself. You have to give feedback on performance to learners – how can you do that without some way of assessing what their performance is? You can have learners help decide what the assessment schema is, given their goals, but even with dreaded standardized testing, it’s useful feedback.
The much-maligned TOEIC is a good example: I know it doesn’t measure production, and the accents are (were?) all from unusually bland US speakers. Yet TOEIC results do correlate with overall language ability. I did linguistic auditing for a big multinational for many years, and as corporate policy the TOEIC was used as an initial placement tool. And it actually worked very well – we coupled it with an oral interview, and the TOEIC score was usually pretty much right on with what we arrived at after the 30-minute interview. So in my experience there is correlation. I’d be interested in any studies on this if anyone knows of any.
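The kind of agreement described above – a cheap proxy score tracking a hands-on rating – is just a correlation question, and it’s easy to check numerically. Here’s a sketch with entirely invented numbers (not real placement data) computing the Pearson correlation coefficient between hypothetical TOEIC scores and interview ratings:

```python
# Toy illustration: how well does a proxy test score track a hands-on rating?
# All numbers below are invented for the example, not real placement data.
import math

toeic = [450, 520, 610, 700, 780, 850, 905, 960]      # hypothetical TOEIC scores
interview = [3.1, 3.4, 4.0, 4.6, 5.2, 5.8, 6.1, 6.5]  # hypothetical interview ratings (1-7)

def pearson(xs, ys):
    """Pearson correlation coefficient between two equal-length samples."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

r = pearson(toeic, interview)
print(f"r = {r:.3f}")  # values near 1.0 mean the cheap test tracks the interview closely
```

A coefficient near 1.0 would support using the cheaper instrument for initial placement, exactly the role the TOEIC played in the scenario above – though of course a correlation on one sample says nothing about what the test fails to measure.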
Of course you can’t use the TOEIC exclusively, especially as a progress test. That would be an invalid use of the TOEIC tool. You need to use it as part of the overall assessment suite, together with other tools (self-assessment, teacher assessment, performance reviews, portfolios, etc.). But to say “the TOEIC sucks” is like complaining about a screwdriver because it won’t pound in a nail very well (or build a house).
From what I read, granted, institutionalized education is abusing testing tools as well (tests as hammers?). But that doesn’t mean that teachers don’t need to think about how to measure whether what we are doing works or not. We are all enthused about new ways to teach and learn – why? Because these new ways help people learn about the new world and how to interact with it better than previous ways? I assume that as teachers we are enthusiastic about new techniques because they are in some way “better”, but that very comparison is an implicit measurement. We are all measuring all the time, albeit often subconsciously and subjectively. So instead of this “hidden” measurement, let’s just acknowledge that we measure and just try to find the best ways to do it.
Two requests: first, I admit that I just “don’t get it” when it comes to the anti-measurement thing. If anyone would like to help me out with some links, I’d appreciate it. Second, amazingly, I found myself disagreeing with every sentence in a Stephen Downes post this morning (I actually went in and checked – I have a problem with every sentence, except the sentence “Sheesh.” which was OK). What am I missing? Downes is an idol – what gives? What are the “terms of success” that Downes would establish? Again, any linkage anyone could send would be great. Sorry for being clueless.