
__**Assessment of Open Learning Experiences**__

=Open Learning Chapter 13: Assessment of Open Learning Experiences=

Introduction
Aided by technology, Open Learning is a concept with transformative power. As more of the world's population requires education, more education is available through online resources than ever before. The transactional barriers of access are being broken down to the point where a person requires only an internet connection and a need or desire for learning, and the world is their proverbial oyster. Open learning provides the information and resources needed to support a learning task, but how do students know they have learned the material? How do open learning organizations test for understanding, given that almost no staff resources can be allocated to the assessment process? This chapter will discuss how traditional learning is assessed, how open learning resources are currently assessed, and what assessment requirements are unique to open learning.

Open learning and open education are ideas with many definitions, but for this chapter we will stick with the most basic one: Open Learning is a process that strives to provide educational resources to potential learners free of charge. Open Courseware is the product of open learning, in that it provides the courseware that supports learning free of charge. The limitations will be discussed in other chapters; this chapter's focus is on assessment of open learning, so we limit our definition to avoid confusion or debate over semantics. Traditional learning, as used here, includes distance learning but refers largely to the familiar process involving cohorts, instructors, classrooms, universities, degrees, and interaction among faculty and students.

Assessment is a contentious topic in the world of education. How do you best assess learning? What types of assessment are best suited to different learning contexts and individual learners? How do instructors balance assessment with learning objectives? This chapter will address assessment of traditional learning environments, current assessment of open learning environments, and direction for improvement of assessment in open learning environments for personal knowledge gain, and potential paths to assessment for certification of open learning experiences.

What is Assessment?
Why do we assess learners? The answer might surprise those not intimate with the educational design process. Assessment is used as much as a determination of the efficacy of the learning material and instruction as it is a judgment of a student's ability to recall and apply the information. Instructors and institutions use assessment to determine whether their learning objectives are being met through the teaching methods and materials provided to students. Assessment takes two basic forms: formative and summative. Formative assessment is conducted while the instructional material is being given (along the way) to check for understanding and comprehension of the material as it is being taught. It can take the form of short quizzes, intermediate question-and-answer sessions, or the application of intermediate skills to solve smaller problems. Summative assessment is a determination of a student's overall understanding of the material. It is completed at the end of a lesson or school term to determine whether students can aggregate and apply their knowledge to solve more complex problems. All assessment is, to a degree, formative in nature in that educational systems are designed to build on previous knowledge, but for the sake of this chapter, we will assume that the instruction is conducted over a finite term and use the term "formative" for intermediate testing and "summative" for cumulative testing (Starkman, 2006).

Assessing students has several advantages; the most important for the process is that instructors and institutions can determine which elements of their instruction or pedagogy work well, and which need improvement. This is critically important and is determined by examining testing trends. That is to say, if a population of students answers incorrectly or applies incorrect skills, the instructor knows to revisit the methods used to instruct the material. The most valuable testing methods for instructors are those that require application of a thought process, or scientific method, to arrive at an answer. By reviewing the steps a student took to arrive at an answer, the instructor can determine to a finer degree where the misunderstandings lie, both for individual students and for populations of students. In short, assessment has the advantage of improving both student and instructor performance in any learning environment.

What happens when we don't assess? The easy answer is that we don't know. By this we mean that without assessment we don't know what we don't know. If institutions only conducted formative assessments, they would only have knowledge of their students' understanding of each individual chunk of material in the shorter term. If institutions only conducted summative testing, they would only have an idea of a student group's larger synthesis of the material, without understanding the steps along the way. If neither of these is conducted, we simply have no measurable evidence of students' knowledge, or, on the other side of the coin, of the institution's efficacy in instructing the learning material. Finally, Stiggins et al. contend that assessment has a direct and important relationship with student motivation, and that assessment tools should be designed with this in mind (p. 27).

Put simply, assessment is not a luxury; it is a requirement. Schools must assess students, both formatively and summatively, in order to generate objective feedback, create learning experiences that motivate students, and drive improvement of educational material and classroom pedagogy. Both kinds of assessment tools should give schools the information needed to improve their instruction to best meet students' needs and the educational objectives.

General Guidelines for Assessment
The goals of assessment should not change based on the educational context of the learner; rather, the methods and opportunities for different types of assessment change. This section will present some general guidelines for assessment that instructors should follow in order to attain their objectives. The general goals of assessment, as outlined in the previous sections, are to:
 * Improve instruction
 * Improve student understanding of material
 * Enhance student motivation to understand and use material
 * Report learning outcomes (Department of Defense Education Activity, 2008)

Further, assessment answers the following questions:
 * Where is the learner going?
 * Where is the learner right now?
 * How does the learner, and learning system, close the gap? (Cole & O'Brien, 2007).

To meet the goals and effectively answer the questions, instructors must carefully design assessments that give them the necessary information. Assessments take many different forms:
 * Short answer quizzes
 * Essay questions
 * Academic papers
 * Multiple choice and T/F tests
 * Group problem solving exercises

This list barely scratches the surface of the world of assessment techniques, but most loosely fall under one of the above categories.

There are many different ways to assess learning, and this chapter will not judge the efficacy of one method over another, but rather present those that are commonly used and how they can best be implemented in each educational context.

Assessment of Traditional Learning Experiences
Most students in the world are engaged in "traditional" learning experiences, where there is a classroom, an instructor, and books. Even online education experiences rely on this basic learning context. The field of assessment is ever-changing, but there are some basic techniques for assessment that are widely followed in the traditional educational field.

Tools
Short answer quizzes are used as both a formative and summative tool in the traditional environment. They can be used in the form of a one-sentence response covering the who, what, when, where, how, and why of a subject, all the way to a paragraph-style answer. The short-answer quiz allows the instructor to determine whether the student can grasp the concepts, and identifies essential concepts and interrelationships (Angelo & Cross, 2011).

The longer form essay question is an expansion on the short-answer quiz, in that it allows students to expand their explanation of the topic matter and allows teachers to get a closer view into the student’s synthesis of the material.

Academic papers are used as both a formative and summative tool in that they can be used during the semester to check for acquisition and synthesis of sections of material, and as a capstone to a course or series of courses to assess the student’s greater understanding of the body of learning.

Multiple choice and true/false tests are used to test rote knowledge, and are the simplest way to gain feedback on a larger scale. While these question types are not best suited to test for synthesis of educational objectives, they can be effective as a test for knowledge if questions are carefully designed. “Constructed response” (short answer) questions are more effective than multiple-choice questions at gauging learning outcomes (Clariana, 2003).

Finally, group problem solving exercises test the application of knowledge by having a group of learners work together to solve a problem or complete a larger task. There are many ways to implement this strategy, such as a long-term group project or a short-term team testing exercise. Group work has the potential to increase understanding through peer-to-peer instruction and a common goal, but also has the potential to encourage social loafing or other negative group behaviors.

Grading
Grading in the traditional classroom and in internet-based distance education is almost exclusively completed by hand, by the instructor or teaching assistant. The exceptions are those tools that permit quantitative grading schemes, such as multiple-choice and true/false answers. The non-objective testing methods are subject to bias in grading, while the objective tests are difficult to create in a manner that tests synthesis of knowledge. Objective testing techniques are well suited to large groups that may be time-prohibitive for instructors to score, while the longer-answer formats like academic papers and short-answer questions allow students to reflect understanding of the material to a greater depth. There are tools to help instructors create tests that reflect a higher order of understanding, using Bloom's Taxonomy as a measurement of question efficacy (Carneson, Delpierre, & Masters, 1996). Generally speaking, traditional learning is assessed by the instructor or teaching assistant; there is human interaction, reflection, and response to most of the testing methods. The exception is the multiple choice and true/false regime, which can be graded completely electronically and without instructor intervention. Without modification of the way these are implemented, it would be very difficult to port most of these methods to the open learning environment.
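The electronic grading of objective items mentioned above reduces to comparing each response against an answer key. A minimal sketch in Python (the question ids and answer key here are hypothetical, not taken from any real course or grading system):

```python
# Toy illustration: electronic grading of an objective multiple-choice /
# true-false test against a fixed answer key. Hypothetical five-item quiz.
ANSWER_KEY = {
    "q1": "b", "q2": "true", "q3": "d", "q4": "false", "q5": "a",
}

def grade_objective_test(responses, key=ANSWER_KEY):
    """Return the percentage score and the ids of missed questions."""
    missed = [q for q, correct in key.items() if responses.get(q) != correct]
    score = 100.0 * (len(key) - len(missed)) / len(key)
    return score, missed

score, missed = grade_objective_test(
    {"q1": "b", "q2": "true", "q3": "a", "q4": "false", "q5": "a"}
)
print(score, missed)  # 80.0 ['q3']
```

Note that the machine can report *which* items were missed, but not *why*; diagnosing the misunderstanding still falls to a human or to far more sophisticated software.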

Assessment of Open Learning Experiences
Although MIT's Open CourseWare has been in service since 2001, open education is still in its infancy, as no cohesive certification or assessment techniques have been created to allow students to demonstrate their acquisition of the subject matter (Massachusetts Institute of Technology, 2011). In this sense, learners using the widely available open learning systems are left to their own devices for assessing their knowledge. MIT's OCW, for instance, simply places courseware on the internet for use. Many of the instructors have included computer-based formative and summative knowledge tests, and videos explaining solutions to homework problems, but there is no requirement, nor is there built-in feedback, for most of the assessments. The U.K.'s Open University will not be addressed here, as for the purposes of this book it is not truly "open": the university charges tuition and provides live feedback for students.

How does assessment work, then, for open learning environments? In a word, it doesn't - yet. As previously stated, there are courses in which the instructor has created a computer-based (or other) test, but largely, providers simply place the course online and leave it for passive consumption. In order to create learning environments that not only provide knowledge but also test for acquisition of that knowledge, open learning environments need to create a system that meets the goals of assessment stated earlier in the chapter:
 * Improve instruction
 * Improve student understanding of material
 * Enhance student motivation to understand and use material
 * Report learning outcomes (Department of Defense Education Activity, 2008)

Improving instruction is still important in open learning environments. Instruction and learning should never be static; rather, they should be adaptable and changeable. MIT's OCW site does not allow for this, and therefore a meta-site with //true// open learning should exist. This would allow instructors and students to remix and reuse original course material in order to create and refine courses to better meet their individual needs.

Improving student understanding of material is one of the goals of education in general and should be no different for open learners.

Enhancing student motivation to understand and use the material builds on the first two: there must be feedback for students in order that they may better understand what they don’t understand, and how to fix that.

Finally, reporting learning outcomes is a very important aspect if open learning is ever to move to an accredited status alongside traditional colleges and universities.

What needs to be created
A major question remains: do learners have humans to grade their work, or does true "open learning" require automatic grading? If students can depend on human interaction, the assessments need not change at all. This isn't that far-fetched, as wealthy donors and institutions have funded building blocks of the OCW movement, such as the William and Flora Hewlett Foundation, the Bill and Melinda Gates Foundation, and the MIT endowment, among many others. If the resources exist, and donors are truly dedicated to the idea that all students deserve access to education, why not pay a group of adjunct faculty to instruct? The spread of online educational opportunities has proven that students and instructors need not occupy the same classroom for effective learning to occur (Simonson, Smaldino, Albright, & Zvacek, p. iii).

For this chapter, we assume that live instructors are not part of the open learning environment. This leaves the options of peer review, self-assessment, automated scoring, and paid professional review. If the learning and learning material is no different from traditional, why should the basic assessment tools be? Recall the general assessment tools:
 * Short answer quizzes
 * Essay questions
 * Academic papers
 * Multiple choice and T/F tests
 * Group problem solving exercises

There is no real need to re-invent the wheel in this regard. The question is how to adapt these to the open learning environment to meet the educational objectives for each student.

Short answer quizzes, essay questions, and academic papers are the greatest challenge to assessment, but need not be. The options listed above for grading each have advantages and challenges:
 * Peer review: Students will get no-charge review of their work, but it may not be conducted by an expert. This is easy to implement, but hard to verify quality.
 * Self-assessment: Students still get no-charge review of their work, but again, they will not likely get the quality review they need to truly improve themselves.
 * Automated scoring: Technology is being developed to allow automated scoring of short-response answers and even longer essay questions and academic papers. While the notion of a computer grading an essay is unpalatable to some, Wang and Stallone Brown (2007) found that automated systems gave statistically significantly higher scores than human graders; with newer technology, automated scoring could become a viable solution to the open learning grading problem. What computerized grading systems cannot currently provide is substantive feedback on a student's essay, which is itself one of the primary goals of assessment in the first place. At this time, there is no adequate substitute for human grading of short answers, essays, and academic papers.
 * Paid professional scoring: While this steps slightly outside the purest definition of open learning, there is the potential for vetted scholars and professionals to provide learned review of educational assessments. This could be the first step to credibility for open learning, in the sense that critical human review of open learning assessment work could lead to a sort of open learning accreditation, while providing students interaction similar to that of a traditional learning environment. Obviously cost is a strong consideration, but there is also the possibility of private funding for this option, or of encouraging instructors at accredited institutions to work pro bono with one or a small group of open learning students.
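The peer-review option above depends on matching reviewers to submissions fairly. One illustrative approach (the rotation scheme and the learner names are assumptions for the sketch, not any existing platform's method) is to shuffle learners into a ring and have each review the next few peers, so that nobody reviews their own work and everyone is reviewed the same number of times:

```python
import random

def assign_peer_reviews(learners, reviews_each=2, seed=None):
    """Ring assignment: each learner reviews the next `reviews_each`
    learners in a shuffled circle, and never reviews themselves."""
    order = list(learners)
    random.Random(seed).shuffle(order)  # seeded for reproducibility
    n = len(order)
    return {
        order[i]: [order[(i + j) % n] for j in range(1, reviews_each + 1)]
        for i in range(n)
    }

# Hypothetical cohort: each learner reviews two peers and is reviewed twice.
pairs = assign_peer_reviews(["ana", "ben", "cho", "dee"], reviews_each=2, seed=1)
```

The scheme balances workload automatically, though it does nothing to verify the quality of the reviews themselves, which remains the core weakness of peer review noted above.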


Current Practices of Open Learning Assessment

Many well-known universities, including Yale, Harvard, the Massachusetts Institute of Technology, and Carnegie Mellon, currently provide or distribute open education resources online under each university's "Open Learning Initiative" programs. However, learners' performance in this open courseware is usually not assessed, unless such courses are recognized for credit in their tertiary education. In such cases, learners have to pay a course fee to be assessed, and this can no longer be considered true open learning.

Some open learning courses provide "Question and Answer" segments similar to the review questions posted at the end of each chapter in a textbook. However, such assessment techniques do little to encourage cognitive thinking and understanding of the subjects taught. To enhance open learning experiences, open learning providers can consider including more interactive assessment in their course designs.


Possible Practices of Open Learning Assessment

Since open learning is done online, it is similar to online learning. This means that there should be certain online learning assessment methods and tools that can be used for open learning assessment. However, we also need to keep in mind the important differences between open and online learning.

The two main differences between open and online learning are cost and support. For online learning, learners have to pay for the course, and instructor(s) are assigned to guide learners through the course online. However, as we already know, open learning is free education, and it lacks the human interaction portion of traditional and online education. Therefore, any assessment method or tool used must be delivered online using technology. Course designers must make up for the lack of human interaction with sophisticated technology that reinforces information through both visual and audio channels. As in online learning, traditional tests should be a smaller part of the assessment and grading process. Peer-to-peer discussion, review and evaluation, self-evaluation, and feedback, as well as the short quizzes used in online learning, are possible open learning assessment methods.

There are many e-learning or computer-based assessment tools used by online courses to facilitate the assessment methods mentioned above. Some of these tools are listed here, and they can potentially be used for open learning assessment. Some modifications to these tools may be needed to cater to the unique learning environment of open education.

Blackboard and Moodle are two examples of online platforms used to create virtual learning environments (VLEs) that facilitate interaction and evaluation and track grades and participation for online courses. Hence, these platforms can be used for peer-to-peer discussion, review, and critique in open learning assessment. However, the assessment functionality of VLEs is usually limited and simplistic in design and aims.

Questionmark Perception is software designed to deliver assessments through short quizzes using a variety of question types, such as multiple choice, drag and drop, checklists, and surveys, using Flash or Java elements. Questionmark Perception even supports laboratory-preparation assessment questions in which learners construct experiments in virtual laboratories.

Other online test assessment software includes OpenMark, developed by the Open University in the UK and used for both formative and summative assessment. In addition to supporting the full range of question types offered by other software, OpenMark provides interaction and feedback capabilities. Maple Testing and Assessment (Maple, 2006) is an assessment tool for mathematical questions built on the mathematical software Maple. For short answer or essay questions, software such as e-rater and free-text marking engines has been developed to assess such assignments automatically. However, e-rater's grading is concerned more with writing style and linguistics than with content, while the free-text marking engine adopted by the Open University cannot cater to questions that require answers of more than two sentences. Although there is at present no adequate automated substitute for human grading of essays and papers, one is becoming more likely in the foreseeable future.
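To illustrate why automated free-text marking struggles with content, consider a deliberately crude keyword matcher (a toy far simpler than e-rater or the Open University's engine, and purely hypothetical): it estimates a content score from the fraction of expected key terms that appear in an answer, and is easily fooled.

```python
def keyword_score(answer, key_terms):
    """Fraction of expected key terms present in the answer (case-insensitive).
    Misses synonyms, paraphrase, and negation -- hence the need for humans."""
    words = set(answer.lower().split())
    hits = [t for t in key_terms if t.lower() in words]
    return len(hits) / len(key_terms)

print(keyword_score("Formative assessment checks understanding along the way",
                    ["formative", "understanding"]))  # 1.0
print(keyword_score("It does not check understanding",  # negation fools it
                    ["formative", "understanding"]))   # 0.5
```

A student who writes the opposite of the expected claim can still match the key terms, which is exactly the kind of failure that keeps humans in the grading loop for now.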

In summary, although open learning assessment can adopt some online learning assessment tools and practices, such as peer-to-peer assessment and certain test assessment software, these need to be more sophisticated in design in order to achieve the same results as traditional and online assessment.


Advantages of Open Learning Assessment

The advantages of open learning assessment are much the same as the advantages of assessment in any other type of learning. In general, assessment is used mainly to:


 * reinforce knowledge learnt,
 * evaluate the learner's performance for accreditation purposes,
 * provide feedback to learners on their level of understanding, and
 * provide feedback to course providers on course materials to ensure that the course is relevant, useful and accurate.

These advantages also apply to open learning assessment.

Unlike most traditional assessment methods, e-learning test assessment tools provide learners with instantaneous feedback, enhancing the cognitive and learning experience. Learners are able to review their assignments and quizzes shortly after submission, which aids cognitive processing. E-learning assessment tools also provide unbiased assessment with a low error rate, since grading is based on a common logic for all learners.

Another important open learning assessment tool is peer-to-peer assessment. Such interaction, discussion, and critique encourage critical thinking and new ideas, which should enhance the learning experience. However, since open learning is available at any time to everyone with an internet connection, anywhere in the world, it will be a challenge to ensure the timeliness and quality of such discussions.


Considerations in Open Learning Assessment

As open learning assessment is still in its infancy, course designers will be working with a blank sheet of paper. When designing open learning assessment tools, they will need to apply standard instructional design principles in a unique and mostly unexplored virtual learning environment. Some of the instructional design principles to consider are (Savery & Duffy, 2006):


 * Online learning is more efficient and effective when its curricular structure is fundamentally based on the execution of activities.
 * The activities have to be authentic.
 * All activities that are part of the educational program are related to broader activities.
 * The designed activities have to present a real challenge to the student's development of thinking.
 * The instructional process has to be designed so that the student can take ownership of the execution of the activity.
 * The activities have to involve some type of social negotiation and intervention.
 * It is necessary to create learning situations that facilitate group analysis of the acquisition of knowledge and of the processes that support this acquisition.
 * It is necessary to gather assessment information on every activity and to introduce activities designed on the basis of that assessment logic.
 * Assessment has to energize and guarantee the students' individual and collective learning processes.


Challenges and Future Developments of Open Learning Assessment

Although open learning assessment can use certain assessment methods and tools currently used by online learning courses, some challenges persist as a result of differences in the cost and structure of the two learning platforms. Because open learning is offered as free education, courseware providers may be strapped for funds to purchase high-tech e-learning assessment tools. In addition, without at least one instructor available to facilitate the learning experience, open learning assessment options are further limited. Research findings show that computer-based assessment does little to test cognitive, practical, and subject skills. For open learning assessment to be more viable, it needs tools more creative and robust than those used by online learning.

The ideal e-learning assessment tool for open learning needs to be dynamic and intelligent. It should be able to cater to a wide variety of assessment methods, such as oral and essay assessments, and be able to moderate and facilitate peer-to-peer assessment so that it does not become a case of "the blind leading the blind". The tool should also be as foolproof as possible, to prevent abuse by learners finding ways to work around the automatic grading system to gain points. For example, if points are given for participation in online discussions based on the number of postings rather than their content, multiple or irrelevant postings will be encouraged. This may be a tall order given the current capabilities of course designers and software engineers.
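The posting-count loophole just described can be narrowed by crediting only distinct, sufficiently substantive posts and capping the credit any one learner can earn. A sketch, with thresholds that are illustrative assumptions rather than established practice:

```python
def participation_score(posts, min_words=20, cap=5):
    """Credit only distinct posts of at least `min_words` words,
    up to `cap` credited posts per learner. Thresholds are illustrative."""
    seen = set()
    credited = 0
    for post in posts:
        text = post.strip().lower()
        if len(text.split()) >= min_words and text not in seen:
            seen.add(text)
            credited += 1
    return min(credited, cap)

# Ten copies of the same one-line post earn no credit at all.
spam = ["+1 agree"] * 10
print(participation_score(spam))  # 0
```

Even this simple filter defeats copy-paste spam, though a determined learner could still pad posts with filler; judging genuine substance ultimately requires content analysis or human moderation.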

The important question to answer before investors decide to fund the development of more ideal tools for open learning assessment is: what are the aims of open learning assessment? Is it simply to test learners' understanding of course materials and provide feedback to both learners and course providers? Or will assessment be used to accredit open learning courses? Assessment for accreditation purposes will require a more sophisticated and comprehensive assessment system than assessment for testing understanding, in order to ensure the quality of the open learning course as well as of the accreditation that comes with it.

However, accreditation of open learning presents its own challenges and is a contentious issue. The first challenge is for providers to ensure the authenticity of the learner, i.e., that the learner being assessed and accredited is not someone else. Some may also argue that accrediting open learning is akin to accrediting the reading of books, journals, or any scholarly material, since these are open education resources as well. In addition, there may be a conflict of interest for universities that develop open learning courses to accredit those courses without cost.

For open learning assessment to be economically viable, it must prove its worth, or the money invested will be wasted on a white elephant. Therefore, open learning should first gain momentum and popularity with learners to demonstrate its potential. However, the current situation is that even with the increasing abundance of open education resources online, there has been little impact on their usage or on the conduct of traditional higher education. The main reasons are the lack of quality checks, credibility, and accreditation in open learning.


Conclusion

Open learning assessment is still a very new area that has yet to see much development. More can be done to make open learning experiences more attractive and beneficial to learners. As a start, open learning course designers and providers can look to the current e-learning assessment methods and tools used by traditional and online learning courses. These should help to facilitate peer-to-peer assessment, simple question assessment, and learner and course feedback. However, to achieve a more holistic and credible assessment of the open learning experience without the human touch, some customization of the current methods will be needed. The amount of customization will depend on a variety of factors, such as available funds, the growth potential of open learning, and accreditation. These factors also present challenges that need to be addressed. In particular, the questions that need to be answered are whether an investment in open education is worthwhile, and whether accreditation of open learning is viable and justified.