Monday, December 12, 2011


WebPA was on the boil a few years ago and seemed to fade from sight like too many good projects.
Now it has been updated and could well be coming to a University near me soon.
I got to test-drive it today, trying to replicate a peer assessment I normally run through Bristol Online Surveys. I opted not to use the VLE, as its survey tool can't handle the question types I need for peer assessment.
Here are some observations from my quick trial which included the full cycle of uploading data through to viewing reports.
  1. Uploading data via CSV files was a bit tricky... I think it required me to include the headers. 
  2. There is only one 'Template' for uploading staff/student data, and it does not include the 'group' header, so that needed adding in. If you vary from the prescribed syntax the upload fails... or at least it seemed that way. I say 'seemed' because one upload that I thought had failed, and that did not appear straight away, re-appeared later on. 
  3. By default the form requires a student to score themselves (self-assessment), but this can be changed under 'Assessments' so that you can run peer-only rather than self AND peer assessment. 
  4. Nice to be able to email those who have responded or those who have not. 
  5. Setting the group marks was intuitive. 
  6. As a student fills in the form, it clearly highlights any missed scores when they try to submit. 
  7. You can also penalise students in the scoring if they do not respond. Perhaps the penalty should match the maximum percentage of marks they could have obtained from the peer assessment? You need to see that one coming and make a policy decision.
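To make the penalty idea in point 7 concrete, here is a minimal sketch. The function name and the 20% weighting are my own assumptions for illustration, not WebPA's actual scheme: if the peer-assessed component carries a fixed share of the mark, a non-responder could forfeit up to that full share.

```python
# Hypothetical penalty sketch (names and weighting are assumptions, not WebPA's
# actual rules): a non-responder loses the whole peer-assessed share of the mark.

def apply_non_response_penalty(mark, peer_weight=0.20, responded=True):
    """Deduct the peer-assessed share from students who did not respond."""
    if responded:
        return mark
    return mark * (1 - peer_weight)

# A student on 70 who never filled in the form:
print(apply_non_response_penalty(70, responded=False))
```

The appeal of pegging the penalty to the component's maximum weight is that non-response can never cost a student more than the peer assessment was ever worth.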
[Photo: a working group working on the case study]
I think WebPA is a lot nicer to administer than my current approach, where I am clearly using a generic survey tool rather than something designed especially for the task. WebPA also offers an alternative scoring system whereby students are required to share a certain amount of 'score' between the group members. That way they cannot all simply award each other top marks. Again, this would require some thought and expectations management before launching the assessment.
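The shared-score idea can be sketched in a few lines. This is my own illustration of the principle described above, not WebPA's actual implementation; the pool size and function name are assumptions. Each assessor's raw scores are rescaled so they always sum to a fixed pool, so inflating everyone's score changes nothing.

```python
# Sketch of a 'shared pool' peer score (an assumed scheme, illustrating the
# principle only): each assessor distributes a fixed pool of points among
# group-mates, so awarding everyone top marks is impossible.

POOL = 100  # hypothetical fixed number of points each assessor shares out

def normalise_scores(raw_scores):
    """Rescale one assessor's raw scores so they sum to POOL."""
    total = sum(raw_scores.values())
    return {name: POOL * score / total for name, score in raw_scores.items()}

# Even if an assessor tries to give everyone the maximum, the shares even out:
print(normalise_scores({"Ana": 5, "Ben": 5, "Cam": 5}))
# each group-mate simply receives an equal third of the pool
```

The design point is that scores become relative rather than absolute: only by rating peers differently can an assessor move any individual's share up or down.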