The general belief is that interactive content improves engagement. The truth is, it depends on the learner profile. Factors that define the learner profile include age, gender, education, region, industry, experience, likes and dislikes, beliefs, etc. All these factors must be taken into account when conducting a training session or developing an e-learning course. A trainer or an instructional designer determines the amount of interaction needed in an e-learning course.
Common interactive elements used in e-learning courses are Hot-spots, Click-reveal, Drag and drop, Quiz / Assessment, Games and Simulations. Click-reveal is easy to use, while simulations are complex. Complex interactions demand more learner attention because the chances of making an error are higher. We could use this as an indicator of complexity, i.e., the odds of a learner performing an incorrect action, or in other words, the probability of making an error.
It is fairly easy to apply this to assessment questions. The probability of making an error is simply the number of possible incorrect responses (IR) divided by the total possible responses (TR). Note that the incorrect responses are simply the total responses minus the correct responses (CR).
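As a minimal sketch of this calculation (the function name and signature below are purely illustrative, not part of any standard):

```python
def complexity_index(total_responses: int, correct_responses: int = 1) -> float:
    """Structural Complexity Index of a question: the probability of an
    incorrect response, i.e. IR / TR, where IR = TR - CR."""
    incorrect_responses = total_responses - correct_responses
    return incorrect_responses / total_responses
```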
In case of a True or False question type:
TR (T, F) = 2
CR = 1
IR = 1
So, the probability of making an error is 0.5 (i.e., IR / TR).
In case of an MCQ having 4 options with only 1 correct option and the learner is aware of that:
TR (A, B, C, D) = 4
CR = 1
IR = 3
So, the probability of making an error is 0.75 (i.e., IR / TR).
Similarly, for a Multiple Response question, i.e., an MCQ with 4 options where more than one option is correct and the learner is aware of this:
TR (AB, AC, AD, BC, BD, CD, ABC, ABD, ACD, BCD, ABCD) = 11
CR = 1
IR = 10
So, the probability of making an error is approximately 0.91 (i.e., IR / TR).
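Using the illustrative sketch above, the three cases work out as described:

```python
# True or False: TR = 2, CR = 1
print(complexity_index(2))              # 0.5

# Single-answer MCQ with 4 options: TR = 4, CR = 1
print(complexity_index(4))              # 0.75

# Multiple Response with 4 options (learner knows more than one is correct):
# selections of 2, 3 or 4 options = 6 + 4 + 1 = 11, only one of which is correct
print(round(complexity_index(11), 2))   # 0.91
```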
Although this does not take into account the academic complexity of the question, it does capture its structural complexity. Let's call this number the Structural Complexity Index, or simply the Complexity Index. The Complexity Index is high for a fill-in-the-blanks question and even higher for a subjective-type question, because the possibility of writing an incorrect answer is much greater.
Sequencing / Ordering / Ranking:
An under-utilised question type is Ordering. Ordering 3 items can be done in 3! = 6 ways, only one of which is correct, so its Complexity Index is 5/6, or about 0.83. With 4 items to order, there are 4! = 24 possible sequences and the Complexity Index rises to 23/24, or about 0.958. The items to be ordered can be plain words or even sentences.
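Continuing the illustrative sketch, an ordering question only needs the factorial of the item count:

```python
from math import factorial

def ordering_complexity(items: int) -> float:
    """Complexity Index of an ordering question: only 1 of n! sequences is correct."""
    return complexity_index(factorial(items))

print(round(ordering_complexity(3), 2))   # 0.83
print(round(ordering_complexity(4), 3))   # 0.958
```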
Adjusting the Complexity Index:
From the above examples it may appear that a True or False question is the easiest to answer, since it has the lowest Complexity Index. However, it is possible to frame questions with an even lower complexity. To do that, we could create an MCQ with 3 correct options out of, say, 5, where selecting any one of the 3 earns full points. This reduces the Complexity Index to 0.4 (i.e., 2 incorrect out of 5). Likewise, if we cut the total options to 4 while keeping 3 of them correct, the index drops to 0.25 (i.e., 1 incorrect out of 4).
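In the same illustrative sketch, accepting any one of several correct options simply raises CR:

```python
# Any one of 3 correct options out of 5 earns full points: IR = 2, TR = 5
print(complexity_index(5, correct_responses=3))   # 0.4

# Any one of 3 correct options out of 4: IR = 1, TR = 4
print(complexity_index(4, correct_responses=3))   # 0.25
```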
Who does it benefit:
One may wonder whether there is really a need to frame a question that is easier to answer correctly than incorrectly. This brings us back to the initial part of this discussion: what is the learner profile for the course that is being built?
If most of the learners in the group lack motivation to learn, it may be a good idea to reward them for providing responses that are otherwise considered trivial. They will then be more willing to come back to learn and try out greater challenges.
Complexity Index of a quiz:
Since the Complexity Index of a question is a probability, the Complexity Index of an entire quiz is simply the product of the individual indices (treating the questions as independent of one another).
So, a quiz of 4 True or False questions scores 0.0625 (0.5 x 0.5 x 0.5 x 0.5), whereas a quiz of 3 MCQs and 1 True or False question scores 0.2109 (0.75 x 0.75 x 0.75 x 0.5). The numbers clearly tell us which quiz is harder: the higher the index, the harder the quiz.
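A quick check of these figures, again as an illustrative sketch using only the standard library:

```python
from math import prod

def quiz_complexity(question_indices):
    """Complexity Index of a quiz: the product of the individual question indices."""
    return prod(question_indices)

print(quiz_complexity([0.5] * 4))            # 0.0625     (four True or False)
print(quiz_complexity([0.75] * 3 + [0.5]))   # 0.2109375  (three MCQs + one True or False)
```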
To reiterate, these are only structural numbers and are not a measure of the academic complexity of the subject. We assume that all questions have the same academic complexity, i.e., the quiz is not a mixture of easy questions and HOTS (Higher Order Thinking Skills) questions, and that a learner will perceive all questions as having the same level of difficulty.
Measuring the Complexity Index for a course:
Just as we computed for a quiz, if we can define the parameters of an entire course based on the types of content and interactions in it, we can arrive at the Complexity Index of a full course (a rough sketch follows the list below). The current naming convention for complexity levels is Passive Learning (a.k.a. Page-Turner) (L1), Limited Interactivity (L2), Complex Interactivity (L3) and Simulated (L4). A single measurable index (or a set of indices) that defines the complexity of a course has several benefits:
- Less ambiguous: Puts an end to the confusion over whether a course belongs to one level or another.
- Mapping of learners to complexity: Determines with greater accuracy which Complexity Index works best for a specific learner profile, through research and continuous improvement. What can be measured can be improved: if the complexity of a course can be measured, it can be studied and adjusted.
- Easy to build: Makes it easy to plan, build and track the development of courses.
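As a rough illustration of the idea only: the interaction types and index values below are hypothetical placeholders, not an established standard, and aggregating by product simply extends the quiz approach above.

```python
# Hypothetical example: each interaction in a course is assigned a structural
# Complexity Index. The values below are illustrative placeholders only.
course_interactions = {
    "click_reveal":  0.25,
    "drag_and_drop": 0.60,
    "quiz":          0.2109375,   # e.g. the 3-MCQ + 1-True/False quiz above
    "simulation":    0.90,
}

# Aggregated as for a quiz: the product of the individual indices.
course_index = prod(course_interactions.values())
print(round(course_index, 4))   # 0.0285
```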