E-Learning Design Principles
In E-Learning Design Principles, taught by Professor Ken Koedinger, my team and I designed and implemented an online course for learning American Sign Language (ASL). Specifically, our instruction aimed to teach learners to recognize and translate into English five basic greetings and phrases, to identify the different gesture aspects of a sign, and to gain an introduction to deaf culture. We conducted an A/B test that varied the instruction based on the temporal contiguity learning principle and found no significant difference between the two groups. The null result may be due to our small sample size or to a violation of the redundancy principle. For future work, we would like to add instruction that teaches learners how to sign and provides immediate feedback based on the correctness of their signing.
When students consider learning a new language, they typically don’t think of learning sign language. But there are many reasons why knowing ASL can be beneficial. ASL has its own grammar and vocabulary different from those of English. Even though there is no speaking involved, it makes use of the eyes, hands, face, and body. In addition, over 28 million Americans are considered deaf. Deaf communities are present throughout the United States and the world, and they have their own culture and histories that are worth learning about. An ASL online course would help bring awareness to deaf culture and introduce people to a new form of expression to communicate with an entire population.
Although there are many online resources that teach ASL, unfortunately many of these are not useful for a novice. A majority of tutorials found online are actually inaccurate and end up being examples of what not to do. In addition, access to these resources and to ASL courses is often costly and inefficient. With an online course that teaches students sign language, these resources could be made available to the public with real-time feedback and practice.
Our initial scope was too wide; we wanted not only to teach people how to recognize sign language, but also how to sign and how to fingerspell. Our initial plans involved creating assessments to verify whether students were signing correctly, which would have been too time-consuming and outside the scope of this course. After conducting an expert interview, we learned what would be feasible to teach in the time frame that we had and narrowed our scope to our current instructional goals.
We set out the following learning objectives:
Deaf Culture and Customs
Students should be able to identify 3–5 differences between signed English, ASL, and fingerspelling.
Students should recognize that there is a difference between the sign language of different cultures.
Students should recognize the difference between big-D Deaf and small-d deaf.
Students should be able to evaluate and recognize 2–4 reasons for providing accommodations for deaf people.
Students should be able to identify the 5 parameters of a sign.
Students should be able to identify the non-manual markers of sign language.
Given prompting, students should be able to recall the meanings of five basic greetings in ASL.
In our A/B testing we used four forms: form A with no captions in the animations, form B with the captions included in the animations, and a variant of each with the assessments flipped. We used random assignment to determine which form each participant received, with a total sample size of 20.
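The assignment procedure above can be sketched in a few lines. This is an illustrative reconstruction, not our actual study code; the form names are made up for the example, and we use shuffle-then-deal so the four groups stay balanced:

```python
import random

# Hypothetical form labels for the 2x2 design (captions x flipped assessments).
FORMS = ["A_no_captions", "A_flipped", "B_captions", "B_flipped"]

def assign_forms(participant_ids, seed=0):
    """Shuffle participants, then deal them round-robin into the four forms
    so that group sizes are as balanced as possible."""
    rng = random.Random(seed)  # fixed seed only for reproducibility of the demo
    ids = list(participant_ids)
    rng.shuffle(ids)
    return {pid: FORMS[i % len(FORMS)] for i, pid in enumerate(ids)}

# With 20 participants, each of the 4 forms receives exactly 5 learners.
assignment = assign_forms(range(20))
```

Plain random assignment (a coin flip per participant) would also be valid, but balanced groups make the pre/post comparison less noisy at this small sample size.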
A comparison of our data with and without the condition showed no significant difference in pre-to-post test score gains. On average, learners performed only slightly better in the captions condition, but the results are inconclusive due to the small sample size.
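A comparison like this one is typically run on per-learner gain scores (post minus pre) with a two-sample t test. The sketch below uses made-up gain values, not our study data, and computes only the Welch t statistic; the p-value would come from a t-distribution table or a statistics package:

```python
from statistics import mean, stdev
from math import sqrt

def welch_t(a, b):
    """Welch's two-sample t statistic on two lists of gain scores."""
    va, vb = stdev(a) ** 2, stdev(b) ** 2
    return (mean(a) - mean(b)) / sqrt(va / len(a) + vb / len(b))

# Hypothetical pre-to-post gains for 10 learners per condition.
captions_gains    = [2, 1, 3, 0, 2, 1, 2, 3, 1, 2]
no_captions_gains = [1, 2, 1, 0, 2, 1, 1, 2, 0, 2]

t = welch_t(captions_gains, no_captions_gains)
# Here t is about 1.28, well below the roughly 2.1 needed for
# significance at p < .05 with ~18 degrees of freedom.
```

This mirrors the pattern we observed: a small advantage for the captions condition that does not reach significance at n = 20.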
Our innovative principle was the use of temporal contiguity, which involves presenting the translated English text together with the ASL sign animations. The purpose of combining text and graphics was based on Mayer's (2001) cognitive theory of multimedia learning: holding both verbal and visual mental representations of the information concurrently in a limited-capacity working memory allows referential connections between elements to be constructed in working memory and encoded in long-term memory.
One possible explanation for the lack of improvement from temporal contiguity is a violation of the redundancy principle: overloading the visual channel increases extraneous cognitive load. Furthermore, cognitive load theory (CLT) provides an explanation via the split-attention effect. The limited cognitive resources of the visual channel were shared between processing the animation and the printed text, so the meaning of the sign shown in the animation may not have been selected and organized into a mental representation.
We gathered feedback from our participants and students in the course. One interesting point was that the animations were not mirrored, so novices may not have understood that the same sign can be flipped or mirrored while retaining its original meaning. In the future, we would include that information in the instruction. It was also noted that temporal contiguity may have been more effective for students with more ASL experience.
We also received feedback that the questions on our assessments were too easy, and that students could guess the answer without prior experience or knowledge. Another participant noted that “the size and alignment of the gifs may be a little confusing...it might be good to explore some more options to layout the questions and options.” These adjustments could be made in future iterations of this work.
To further address our findings, and since there were no noticeable differences in pre-to-post test learning gains, in future work we would like to conduct a difficulty factors assessment to improve our assessment questions.
We would also like to include more instruction that actually teaches students how to sign, along with interactive elements that let students practice their signing and receive real-time, automatic feedback from the system or practice conversations online with other ASL students or experts; this would allow us to measure engagement beyond post-test scores. In future work, we plan to use deep learning to recognize a student's signs and provide automatic feedback for refinement and induction. We also plan to use additional techniques, such as eye-tracking and time spent on formative assessments, to gain information about engagement with the learning activity in our next iterations.
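The planned feedback loop can be sketched at a high level. Everything here is hypothetical: the sign names, the feature vectors, and the nearest-centroid recognizer, which is only a stand-in for the deep model we intend to train on the learner's hand-keypoint features:

```python
from math import dist

# Made-up keypoint-feature templates for a few signs; a trained deep
# model would replace this lookup-plus-nearest-centroid stand-in.
SIGN_TEMPLATES = {
    "hello":     [0.9, 0.1, 0.5],
    "thank_you": [0.2, 0.8, 0.4],
    "please":    [0.5, 0.5, 0.9],
}

def recognize(features):
    """Return the known sign whose template is closest to the observed features."""
    return min(SIGN_TEMPLATES, key=lambda s: dist(SIGN_TEMPLATES[s], features))

def feedback(target_sign, features):
    """Compare the recognized sign to the one the learner was asked to produce."""
    guess = recognize(features)
    if guess == target_sign:
        return "Correct! That looked like '%s'." % target_sign
    return "That looked like '%s'; try '%s' again." % (guess, target_sign)
```

The point of the sketch is the structure, recognize then compare then respond, which supports the refinement-and-induction feedback described above regardless of which recognition model sits underneath.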