A Time to Help

Just another WordPress.com site

Practical Strategies for Assessing the Quality of Collaborative Learner Engagement June 30, 2013

Filed under: Uncategorized — Anita DeCianni-Brown @ 4:08 pm

With over 70 learning styles, approaches and traits developed by educational scholars, how do we reach online learners in an environment that can sometimes be considered isolating?  This can be achieved by engaging the learner.  “Personal interaction is crucial to the success of all forms of teaching and learning.”  (p. 255)  Computer-supported learning allows for these kinds of interaction, but technology by itself does not promote interaction.  “Technology requires human intervention in design and instruction to assure strong student engagement in networked settings.”  (p. 255)


Setting the stage, or the course, starts right at the very beginning.  For example, most courses are designed to include an icebreaker.  “The icebreaker activity was a comfortable way to quickly get involved in the course.  It was an authentic way to get us actively engaged from the get-go.”  (p. 258)  This allows students to give a brief introduction about themselves.  I have been in courses where the icebreaker included questions directly related to the course, questions about previous experience or knowledge in the topic/content area, and even a top-5 list that included a little of both.  The icebreaker allows the instructor to find out about the learners, and the learners to find out about their peers.  It builds an immediate sense of community within the course.  Often there is a commonality in likes and interests that can enhance exchanges.


In two of my previous courses here at ESC, my Italian class and my Digital Photography class, I used Wimba and VoiceThread.  In Italian, the use of Wimba (or another platform with the same abilities) was essential to the course.  Being able to write in another language is one thing, but being able to speak it requires a lot more practice.  Here we had immediate feedback from our professor on our pronunciation.  But do I think this medium would work in all course scenarios?  Not really.  I do believe that Wimba “helped create a personal dimension in dialogue with the instructor.”  (p. 261)  It also led to a bit of personal connection with others in the course because we would typically “meet” about 10 minutes before class and then possibly chat a little afterward.  The use of VoiceThread was a little different.  We were to create a multimedia presentation that included text, audio and visuals.  It was not a live interaction, but recorded.  It is an interesting site to use because, depending on the project, individuals can post audio responses to a weekly discussion or to a project.  I can see this being useful in a photography course where people can add their weekly critiques in audio.


According to Henri (1991), if one wishes to know what students think about something, find different, safe ways to ask them about that particular thing.  In addition to questioning students, the authors outline several possibilities for assessing engagement in online learning.


Transcript analysis:  In transcript analysis, the assessor first decides what information is desired.  They could be looking for evidence of collective construction of knowledge, evidence of critical thinking, social engagement, depth of dialogue or substantive analysis of course content.  What matters is that the assessor knows what they are looking for, so they can develop a coding system that accurately sorts the evidence into informational categories.
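To make the coding-system idea concrete, here is a minimal Python sketch (the category names and the coded utterances are invented for illustration, not taken from the chapter): once each posting has been hand-coded into the assessor's chosen informational categories, tallying the evidence is a simple count.

```python
from collections import Counter

# Hypothetical coding scheme: the assessor decided on these categories
# in advance, then hand-coded each transcript utterance with one of them.
CATEGORIES = {"knowledge-construction", "critical-thinking",
              "social-engagement", "off-topic"}

coded_transcript = [          # (student, assigned code) -- invented data
    ("student_a", "critical-thinking"),
    ("student_b", "social-engagement"),
    ("student_a", "knowledge-construction"),
    ("student_c", "off-topic"),
    ("student_b", "critical-thinking"),
]

def tally(coded):
    """Count coded utterances per category, ignoring unknown codes."""
    return Counter(code for _, code in coded if code in CATEGORIES)

print(tally(coded_transcript))
```

The point of the sketch is only that the coding scheme has to exist before the counting starts; the tallying itself is trivial once the categories are fixed.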


Third-party interview:  A means to seek “richer, thicker” insights into student perceptions about particular instructional purposes.  The interview can be conducted with an unbiased faculty colleague, a teaching assistant or a peer student: someone who understands the course and its intentions but doesn’t have a vested interest in the responses or the identity of those making them.


LMS “log counts”:  These offer unqualified counts.  They do not indicate anything about the content, relevance, scholarship or responsiveness to instructional purpose of the student postings.


Outside the formal assessment box:  For ethical reasons, private student emails and private chat rooms/virtual cafes should not be viewed by the instructor.  Keep to third-party interviews or anonymous surveys. 


Ethics:  Elements of trust (confidentiality, anonymity and intellectual honesty) should guide the interpretation of information extracted from these sources.  No information should be directly traceable to a student.



LeBaron, J., Bennett, C.  Practical Strategies for Assessing the Quality of Collaborative Learner Engagement.  (2009)  In C. Spratt, & P. Lajbcygier (Eds.), E-Learning Technologies and Evidence-Based Assessment Approaches. Information Science Reference.  (254 – 267)



Validity of Group Marks as a Proxy for Individual Learning in E-Learning Settings June 23, 2013

Filed under: Uncategorized — Anita DeCianni-Brown @ 6:52 pm

Working together in groups in college can range from short-term assignments that take a week to a semester-long project.  Measuring group learning raises issues about the ability to assess the learning from the group work and the ability to measure learning on an individual basis.  Often the group product results in the grade given to all group members.  The mark is the indicator that should reflect attainment of the learning objectives.  However, it may not always accurately reflect that.  The product may meet the learning objectives set forth; however, one individual may have contributed greatly, while another may have given next to nothing.


I have worked in groups both in a traditional classroom setting and via the e-learning environment.  In a traditional setting, this may be a little easier to organize.  We were in class together at the same time, we were given class time to discuss the project and work on pulling it together.  Working in the e-learning environment was a bit more difficult to organize because everyone worked at their own pace, even as far as contribution.  Assessment of work for the face-to-face class consisted of one group grade.  Assessment of work from the e-learning course consisted of a group project grade and an individual grade.  In both of my shared experiences, every individual contributed to the project equally. 


I may be making assumptions, not having had the opportunity to see the course through the instructor’s platform, but I would assume that monitoring the group (or cooperative learning) makes it a little easier to assess the learning based on activity.  In the example from my e-learning class’s group project, the instructor divided us into the groups he wanted us in, and a discussion board was created for each group.  In addition to our group discussion, we also posted our individual contributions to the project there for the others to review and comment on.


Recently, my daughter’s college roommate was involved in a group project.  There were 3 students in each group.  One student contributed nothing to the project and did not even participate in the group meetings.  The other two individuals planned and completed the entire project, yet the same grade was given to all three of them.  It’s not to say that the individual who did not contribute did not know the material, but the grade they received was not an assessment of the learning objectives they had met; rather, it reflected how the project met the learning objectives.


Lajbcygier and Spratt describe a group project of 66 students.  The project was carried out through a blended learning environment that went through almost half the semester.  The majority of the groups consisted of 10 students.  Students were assessed in the following manner:


  • Students separately rated each of their group peers on their contributions to the group.  It was a 5-point scale that included prompts such as, “participated consistently and reliably; usually cooperative; typically showed genuine interest and enthusiasm.” (p. 141)


  • Another peer assessment was done based not on the participation and attitude toward the project, but on the knowledge and understanding of the topic material.  Students were reminded of the learning objectives and asked how they felt their peers met those objectives.


  • The final assessment had learners make a relative judgement of the influence of the group on their topic learning.  Assessment questions included, “discussion with the group often led directly to me learning new things, or to correcting inaccuracies; learning of project content was influenced in roughly equivalent proportions by project discussions as by my own personal study and reading.”


The student assessments were distributed after the project was complete, but before the final grade was given.  Students were informed that their responses to the three assessments would be used to compare different approaches to modifying the marks that individual students might receive for group project work.  After the study was complete, the authors found that project outcome marks did not reflect an aggregate of within-group individual attainment.  Contributions did not relate to within-group variation in individual attainment.  They tested two alternatives: individual students’ ratings of their influence on their peers’ learning, and individual students’ personal perceptions of the influence of the group experience on their related topic learning.  Neither alternative was found to relate to individual learning.
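For illustration, here is a minimal Python sketch of one common way peer contribution ratings could be folded into individual marks (a hypothetical weighting scheme with made-up ratings, not the specific adjustment approaches the authors tested): each student's mark is the group mark scaled by their mean peer rating relative to the group's mean, capped at 100.

```python
# Hypothetical peer-weighting scheme: individualize a shared group mark
# using the 5-point contribution ratings each student received.

def adjusted_marks(group_mark, peer_ratings):
    """peer_ratings: {student: [ratings received on the 5-point scale]}."""
    means = {s: sum(r) / len(r) for s, r in peer_ratings.items()}
    group_mean = sum(means.values()) / len(means)
    # Scale the common mark by each student's relative rating, cap at 100.
    return {s: min(100.0, round(group_mark * m / group_mean, 1))
            for s, m in means.items()}

ratings = {                 # invented ratings from three peers each
    "ana":  [5, 4, 5],
    "ben":  [4, 4, 4],
    "cara": [2, 3, 2],
}
print(adjusted_marks(75, ratings))
```

Schemes like this are easy to compute, but the study's finding is exactly the caution: peer contribution ratings did not track individual learning, so such an adjustment redistributes the mark without necessarily making it a better proxy for attainment.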




Lajbcygier, P., Spratt, C.  The Validity of Group Marks as a Proxy for Individual Learning in E-Learning Settings.  (2009)  In C. Spratt, & P. Lajbcygier (Eds.), E-Learning Technologies and Evidence-Based Assessment Approaches. Information Science Reference.  (136 – 150)


Learning and Assessment with Virtual Worlds June 17, 2013

Filed under: Uncategorized — Anita DeCianni-Brown @ 12:28 am

The use of 3D virtual worlds such as Second Life (SL) can enhance the student learning experience in either a blended or e-learning environment.  With content remaining what is most important, varied delivery methods can create a more engaging experience in which both content and learning skills can be practiced and assessed, directly and indirectly.


Through the use of Second Life, “students can ground their academic knowledge in meaningful practice and rehearse skills through interaction with realistic environment.”  (p. 56)  Such is the case at Duke University School of Nursing.  Dr. Constance Johnson, an instructor at Duke’s School of Nursing, has created a Second Life environment that meets the expanding needs of Duke’s student population.  Students enrolled in the distance-learning program may miss the opportunities of the face-to-face learning environment.  However, through SL, they are able to hear the instructor’s presentation, interact with others in the class in a virtual social setting and practice real-life scenarios in the virtual world that will be helpful as they go into practice.  This does not substitute for the real-life observation time they need; however, it can be used as a virtual example of what could happen and what steps need to be taken within certain medical events and scenarios.  As Hobbs, Brown and Gordon state, using SL allows for a closer relationship between the virtual and real worlds.  (p. 56)  In this case, I would most certainly agree and can see the advantage to using this platform.



Duke is not the only big-name university that has used SL; Harvard and San Diego have also established their own virtual campuses.  They provide lectures and demonstrations.  Harvard instructor Rebecca Nesson said that the use of SL kept students engaged in the class and provided a sense of class community.  (Lamont)  This seems to fall in line with what Hobbs, Brown and Gordon expressed: besides the content offered by instructors, SL offers intrinsic benefits by helping to expose students to novel applications of technology and to practice communication.  There are also social aspects of learning that occur.  Second Life also provides a setting where students can create virtual activities that are based on their interests.


“A key issue for the use of virtual worlds and Second Life in education is to identify the areas where it can extend or improve on existing provision.”  It offers a unique educational opportunity for discovery, social interaction and creativity through the following:


  • Telepresence – users are able to project their point of view into the 3D world.  This can be used to increase engagement.
  • Communication – voice, chat and instant messaging
  • Learning by doing – help students build their own learning materials or demonstrations
  • Learning by becoming – role playing
  • Association – provides a context and content for reporting and reflection through associated e-learning and social networking portals.


There are drawbacks to the use of Second Life.  In order to use SL and other virtual worlds, one must be computer literate and technologically savvy enough to maneuver in and around the environment.  In my own experience for this module, I used a Mac computer and 2 Macbooks just to be able to complete the scavenger hunt.  It took over 3 hours, as I had to download 2 different viewers in order to use the platform.  Another issue that can contribute to frustration is slower internet speeds.  If I am able to work through my technical glitches, I would like to explore some of the classrooms in SL to look at the design activities that take place.  Overall, I feel there is great potential for using SL and similar virtual worlds in education.









Duke University School of Nursing in Second Life:  http://www.youtube.com/watch?v=sL3D-59MbnY


Hobbs, M., Brown, E., Gordon, M.  Learning and Assessment with Virtual Worlds.  (2009)  In C. Spratt, & P. Lajbcygier (Eds.), E-Learning Technologies and Evidence-Based Assessment Approaches. Information Science Reference.  (55 – 74)


Lamont, I.  Harvard’s virtual education experiment in Second Life.  (2007)  Computerworld.




Assessing Teaching and Students’ Meaningful Learning Processes in an E-Learning Course June 13, 2013

Filed under: Uncategorized — Anita DeCianni-Brown @ 2:56 am

E-learning provides learners the flexibility and convenience to take individual courses or to complete their degrees.  As programs continue to expand, instruction methods need to meet the growing needs of a diverse population of learners.  Hakkarainen, Saarelainen and Ruokamo conducted an assessment, from the students’ perspective, of how a digital video-supported, case-based teaching approach supported students’ meaningful learning.  The case-based teaching is inspired by Dewey’s ideas.


The learning materials used included scientific articles, a book, web pages related to the cases, PowerPoint slides and a video.  The video added a valuable approach to the learning.  Students had to analyze real-life cases through a video simulating a social situation.  The purpose of the video was to engage students in discussing the case.  Their Teaching and Meaningful Learning Model (TML Model) presented a design that showed the connection between teaching and meaningful learning.


Teaching functions included:

  • Design and organization of the learning environment for students’ meaningful learning
  • Support and guidance for students’ meaningful learning


Meaningful Learning was divided into two parts – process characteristics and expected outcomes. 


Process Characteristics: 

  • Active
  • Self-directed
  • Constructive
  • Individual
  • Collaborative
  • Co-operational
  • Conversational
  • Contextual
  • Emotionally involving
  • Goal-oriented
  • Reflective
  • Abstract
  • Multiple perspectives-oriented
  • Critical
  • Experiential
  • Multi-representational
  • Creative



Expected outcomes

  • Domain-specific knowledge and skills
  • Transferable, generic knowledge and skills
    • Information literacy
    • Metacognition
    • Problem recognizing, identifying and solving
    • Higher order thinking:  critical thinking, creative thinking, reasoning, planning, analyzing
    • Abstract thinking
    • Collaboration and cooperation
    • Communication
    • ICTs
    • Self-directed learning


Anderson, Rourke, Garrison and Archer developed the concept of teaching presence for e-learning settings that use computer conferencing.  They defined it as the “design, facilitation, and direction of cognitive and social processes for the purpose of realizing a personally meaningful and educationally worthwhile learning outcome.”  During my undergraduate coursework at ESC, I took Italian I and II.  As part of the weekly modules, we were required to participate in a weekly discussion (the discussion could be video conferencing or just audio via Elluminate).  In another course, an instructor could simply have required the class to do the written part and to submit an audio assignment.  However, the use of conferencing allowed for a connection to be created in the course.  We were able to chat for a few minutes before the beginning of class, and at the end, we were allowed to stick around and chat a little more if interested.


Their assessment was part of an action research case study.  The objectives were designed to develop the teaching and learning processes to support a more meaningful learning.  Research objectives and questions included:

  1. How do the teaching activities performed by the teacher support meaningful learning for the students enrolled in the course, from both the process and outcome points of view?
  2. In what way does the video-supported case-based teaching used in the course encourage meaningful learning in terms of both the process and outcomes?
  3. What kinds of emotions does the case-based teaching used in the course evoke in students, and why does it evoke such emotions?


Of the 73 items on the questionnaire: 

  • 9 statements focused on teachers’ support and guidance activities
  • 22 statements formulated to operationalize characteristics of the TML Model
  • 21 statements focused on students’ emotions as they were relevant to learning.  They included:  worry, comfort, boredom, interest, frustration, uncertainty, disappointment, satisfaction, enthusiasm, tension and embarrassment.

Based on the responses, the top three emotions were challenge, interest and enthusiasm.  Students also responded that online collaboration was a principal source of joy, which supports the argument that social interaction is a powerful generator of emotions.  Though the questions used were not available, I would have to agree with the findings based on my own experience as an online learner.  I want to feel that the experience was a beneficial learning experience, or meaningful learning.  Challenge is an important aspect of the learning process.  Though there are different variables that contribute to student dissatisfaction, a lack of challenge is one that, if it happened consistently, would make me re-think where I was pursuing my education.
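As a toy illustration of how Likert-style emotion statements like these might be summarized (the emotions and ratings below are invented, since the chapter's raw responses were not available to me), one can average each emotion's ratings and rank them:

```python
# Invented 1-5 agreement ratings from four hypothetical students.
responses = {
    "interest":    [5, 4, 5, 4],
    "enthusiasm":  [4, 4, 5, 3],
    "boredom":     [2, 1, 2, 1],
    "frustration": [3, 2, 3, 2],
}

def top_emotions(resp, n=2):
    """Return the n emotions with the highest mean rating."""
    means = {emotion: sum(v) / len(v) for emotion, v in resp.items()}
    return sorted(means, key=means.get, reverse=True)[:n]

print(top_emotions(responses))
```

A ranking like this is only descriptive; it says which emotions were reported most strongly, not why they arose, which is what the study's third research question probed.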


Negative emotions were also cited, with stress and uncertainty ranking the highest.  These came about for the following reasons:  tight course schedules, the quantity of material to cover, inadequate directions for learning tasks, group dynamics and problems with the learning management system.  The negative emotions and reasons also seem to be in line with discussions I have had with other online learners.  One of the difficulties in online learning is that it requires the learner to be an independent learner.  Not everyone is able to conform to that type of learning style, which can contribute to higher stress over the course schedule and the quantity of material, as well as to feeling that the directions provided are not what the learner expected.


Hakkarainen, P., Saarelainen, T., Ruokamo, H.  Assessing Teaching and Students’ Meaningful Learning Processes in an E-Learning Course.  (2009)  In C. Spratt, & P. Lajbcygier (Eds.), E-Learning Technologies and Evidence-Based Assessment Approaches. Information Science Reference.  (20 – 32)


Validation of E-Learning Courses in Computer Science and Humanities: A Matter of Context June 11, 2013

Filed under: Uncategorized — Anita DeCianni-Brown @ 3:02 am

Though both traditional and e-learning universities and programs are facing student retention issues, e-learning tends to struggle more with retention.  There are many factors that contribute to retention issues.  These include engagement, feelings of isolation, instructor presence and instructor feedback.  In a survey conducted by Friedman, Deek and Elliot, 62.9% of the students responding wanted an instructor who was more of an expert than a facilitator of the class.  Having an instructor be an expert can mean that the instructor has experience in the course they are teaching.  For example, I have been in photography courses instructed by someone with extensive experience in photography and an arts degree; graphic design courses taught by instructors who have both a degree and industry experience; and accounting courses taught by instructors who are CPAs.  Online learning is self-directed learning, with the instructor serving as the facilitator; however, instructors should have enough experience to aid students when they are experiencing problems and to offer constructive advice that helps the learner grow.

In a 2003 study conducted by Mock, 42% of the students either withdrew from the computer science class or failed to take the final exam.  This rate was significantly higher than the 12% and 26% rates for the same class taught by the same instructor in a face-to-face format.  The article does not specify information related to the course, just that the same instructor taught both the online and face-to-face versions.  It brought to mind an interview/conversation I had with Empire State College’s Director of Online Curriculum, Ellen Murphy.  She pointed out that courses cannot be taught in the traditional face-to-face environment and then just converted to an online course without redesign.  The article did not indicate that this was the case; it is just an assumption that I made.  Murphy indicated that the course design needs to be adjusted to fit the online learning style.  This needs to take into account the amount of reading, assignments and projects that need to be done.  It will differ from the face-to-face courses.  Friedman, Deek and Elliot indicate this as well, based on Shaelson & Huang’s research (2003) showing that many universities lack faculty who are properly trained to create and deliver online classes.

Frey, Faul and Yankelov (2003) cite elements for success in the online learning environment.  They found these to include online posting of grades, sufficiently detailed and accurate lecture notes, well-defined guidelines on how to finish assignments, and consistent and constant contact with the instructor.  The instructor should be accessible through virtual office hours, email or phone, and there should be timely feedback to the student.  Learning styles are important to keep in mind.  Though the online learning environment is geared more toward independent learners, individuals can motivate themselves to adjust their style in order to achieve their goals.  Motivation is important, as is the instructor’s presence in the course.

The authors conclude that without the critical skills of identifying, finding, understanding and using information, the student will be lost right from the beginning.  I do agree with this finding; however, I also feel that necessary steps should be taken to assist students and to make the instructor’s presence known in the class.  In traditional universities, early intervention steps are taken to keep students on track.  If an instructor is not giving proper and timely feedback, this can be a disservice to the learner’s education.  It would be unfortunate to go halfway through a course, or to the end, without receiving feedback.  How can a learner know where and how to improve without it?  Instructors can also recommend that students seek additional assistance to help them be successful.  For example, there are many webinars and presentations designed to promote learner success.  A recommendation to sit in on one of these seminars could help a student become a better researcher or become more engaged in online discussion.


Friedman, R., Deek, F., Elliot, N.  Validation of E-Learning Courses in Computer Science and Humanities:  A Matter of Context.  (2009)  In C. Spratt, & P. Lajbcygier (Eds.), E-Learning Technologies and Evidence-Based Assessment Approaches. Information Science Reference.  (151 – 169)


Chapter 1: Assessing Teaching and Students’ Meaningful Learning Processes in an E-Learning Course June 3, 2013

Filed under: Uncategorized — Anita DeCianni-Brown @ 1:42 am

            Reliability and validity in e-learning assessment is an area that needs to be updated, re-evaluated and continually developed as the online learning environment continues to grow.  According to Markham and Hurst, reliability is the extent to which an assessment is stable over time and the extent to which the results from an assessment device are reproducible.  (p. 2)  Validity is the extent to which an assessment device is assessing what it purports to assess and the extent to which the results from an assessment device represent what the learner has achieved, given the objectives and outcomes of the unit/course.  (p. 3)
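As a sketch of the “stable over time” sense of reliability, one standard quantity is the test-retest correlation: administer the same assessment twice and correlate the two sets of scores.  The scores below are invented purely for illustration.

```python
# Test-retest reliability sketch: a Pearson correlation between two
# sittings of the same assessment. A value near 1 suggests the device
# produces reproducible results; a low value suggests it does not.
import statistics

def pearson(xs, ys):
    """Pearson correlation coefficient of two equal-length score lists."""
    mx, my = statistics.mean(xs), statistics.mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

first_sitting  = [72, 85, 60, 90, 78]   # invented scores
second_sitting = [70, 88, 62, 87, 80]   # same students, second attempt
print(round(pearson(first_sitting, second_sitting), 2))
```

This only addresses reliability; a perfectly reproducible score can still be invalid if the assessment measures the wrong thing, which is Markham and Hurst's separate validity question.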


            From a historical perspective, educational assessments have changed over time.  Some testing has had to be modified to take into consideration validity across cultural groups.  In what I would call traditional testing assessments (multiple-choice, true/false and open-answer tests), I have seen some changes that make me wonder how valid the assessment of one’s knowledge is.  For example, I have seen open-book tests and the allowance of a “cheat sheet” during testing.  It is true that the tests are timed.  One should know the information before taking the test, and it is often said that open-book tests are harder.  But why offer an open-book test where answers can be looked up?  “Cheat sheets” are completely new to me.  Professors have allowed notes to be written on a sheet of paper, with certain specifications as to the size of the sheet.  As the semester progresses, the size of the sheet becomes smaller – yet, as the students have said, so does their writing and their shorthand notes, to accommodate more information.  The authors state that, “Knowledge validity is firstly about the extent to which the assessment being submitted is the work of the student – that this is a valid sample of the student’s knowledge.”  (p. 11)  I have a hard time understanding how the use of open-book tests and cheat sheets provides a valid assessment of student learning.


            There are several types of assessment that are used in both traditional classroom and e-learning environments.  In addition to tests, essays, projects, presentations and artwork are all types of assessment.  The way in which each type is evaluated needs to differ.  It is important to assess each work individually as opposed to comparatively.


            Markham and Hurst suggest three areas for establishing reliability and validity:  educational, technological and process.  In their discussion of educational-oriented changes, I agree with their suggestion of training students to become better evaluators of the information they are receiving.  I equate this to what many traditional colleges offer: a required course in the freshman year, which I’ll call a Freshman Seminar class.  Many students I have taken courses with have discussed the difficulties they have had in navigating the online learning environment, in finding materials, in using the library resources, etc.  At Empire State College, there are several workshops, webinars and seminars that give an overview of these topics, yet students still struggle.  If such a course were a requirement, students might have an easier transition into an e-learning environment.  The plagiarism concerns the authors discuss existed long before e-learning.  In pre-Internet days, I recall seeing signs and flyers on college campuses from individuals offering to write papers for a price.  The difference now is that one doesn’t have to look as hard to find them.  (p. 13)


            Technology Oriented changes include applying technology to help reduce the Knowledge Validity problems.  They also suggest that institutions demand additional attention be paid to quality constraints.  Some of the constraints include clear deadlines, student workload constraints and return of marked assessment within a specified time frame.  (p. 14)


            Finally, they suggest changes in the teaching/learning process.  They discuss a structured approach to peer review.  When using peer review, it is important to be clear that the person doing the review needs to be aware of personalized twists in thinking.  In doing so, the individual will be able to conduct the assessment not only on the content but also on its process.  (p. 15)


Markham, S., & Hurst, J. (2009). Re-Assessing Validity and Reliability in the E-Learning Environment. In C. Spratt, & P. Lajbcygier (Eds.), E-Learning Technologies and Evidence-Based Assessment Approaches. Information Science Reference.  (1-19)