Evaluation blog posts
'''Evaluation and the Toolkit'''
 
 
 
The purpose of the toolkit is to assist others in creating a project which uses a mobile app and Augmented Reality to enhance the experience of working with a material object. One of the key things to consider when putting together a toolkit for any context, but particularly an educational one, is how you will evaluate the outcomes from the surveys and focus groups.  In a recent workshop, the SCARLET Team used a dialectical approach: first we set down the key processes we used to evaluate those outputs, and then we imagined we could somehow talk to ourselves in the past – the "you" who is just starting to put together an evaluative process – and tell that temporal doppelgänger what they need to know to make the process as easy as possible.
 
Reflecting on what is ostensibly a dialectic, what surfaced was the intuitive way we all worked and the difficulty of trying to reify those efforts into something usable by others.  We recalled that, in working toward a cohesive evaluative process for a project like SCARLET, it is important to determine methods for making sense of how the project has developed, targeting key success factors and ideas for improvement.  There are a number of ways of doing just that outside of more formal, academic assessment, which does not necessarily point to the factors that might influence take-up of a particular project or pedagogical enterprise.  In addition, the evaluative process needs to access audience views both before and after the experience of the project.  Initial surveys and focus groups were judged the best instruments for this, and were used to get an idea of what students knew about Augmented Reality and other technologies before, during, and after the pilot.
 
 
The focus group stage is often fraught with some of the biggest problems.  Generally, it is difficult to get people to attend, and if you do, the idea of incentives can create problems of its own.  With regard to incentives, it is important that they are presented as a "thank you" for participating, and not perceived as some sort of payment.  More problematic is the make-up of the group: you don't want a group of individuals who are too similar or too different.  Liz Spencer, lecturer for the Social Research Association at the University of Essex, notes that the best results generally come from a heterogeneous collection of people who are largely unaware of the subject matter but interested in learning more – in our case, a group of 3rd year undergraduates.
 
 
The conceptual framework of a focus group, then, is one which helps the Team see and reflect on the design of the project's platforms, technologies, content, and delivery, as opposed to the exit interview or simple feedback form we have all had to fill out after a workshop, e.g., "on a scale of 1-10, how likely are you to recommend this course to others?".  That kind of information is valuable, to a degree, but it hardly tells you why somebody thought the workshop was valuable.
 
In addition, it is key to understand that focus groups do not produce quantitative data; they are not scientific studies whose results we can point to and say, with confidence, that "95% of users believe that brushing with Colgate improves their social standing at the office."  Yes, they belong to a participative branch of sociology, but, again, focus groups do not produce that level of precise data.  Their inherent value lies in the fact that they can give us an idea of why 95% of the group felt or thought that way about a relatively ordinary toothpaste.
 
 
The next phase will be exit interviews with the academics with whom we have worked.  In attempting to evaluate the overall success of a project, this feedback is essential, particularly because there remains a strong emphasis on working with the HE sector.  As the 2011 Horizon Report suggested, AR remains a technology to watch, both because its relatively low cost makes it attractive to a range of projects and because it offers so many different kinds of applications, of which education is one that has only begun to be tapped in the UK.  Finding out how the academics perceived the project, from its inception through to its completion, is important to the design of future projects involving AR, as well as to the design and construction of the toolkit.  The interviews are straightforward in design, based upon comments the academics have made about the project throughout, and pose questions which underscore how it has affected their teaching, as well as the students' learning.  As with the vast majority of the feedback gathered, these interviews will also be made publicly available through various means, most notably in the toolkit itself.
 
As a means of getting all these steps down for the toolkit, the project manager created a simple chart, with a "Process" column running down the left-hand side.  For the group I was in, this column is where we plotted the various things we have done to gather data, as mentioned above: an initial survey of 3rd year undergraduate students participating in the pilot course; a focus group with those same students, approximately 12 weeks into their course; a brief write-up of that focus group, highlighting key "Lessons Learned"; feeding those highlights back into a meeting for dissemination and internal evaluation; a case study on the project so far; and a final online survey for the students who participated.  I wrote these down in the order that I did them, acknowledging now that the focus group may have been better placed earlier in the course and that the highlights could have been a simple blog post.
 
 
Working left to right on the same sheet, we then moved toward the dialectic, to draw some synthesis from the process: what would you tell yourself if you were starting the project fresh?
 
 
• '''Self-completion survey:'''
 
 
o '''Considerations:'''
 
• Agree the most appropriate format for the survey (paper-based or electronic), dependent on the audience and situation
 
• Be careful to create good open-ended questions
 
• Include a mixture of open and closed questions. Closed questions are easier to answer and provide more structured data, but open questions may provide more detailed responses (a short sketch of such a mixture follows this list)
 
• Split questions which have multiple subjects into simpler questions
 
• Create questions which encourage students to participate
 
• Plan design carefully to ensure ease of use and maximum survey completion rate
 
• Aim for shorter surveys to increase response rate
 
• Create clear, unambiguous questions and clear instructions
 
 
o '''Risks to consider:'''
 
• Too few students participate
 
• Data is irrelevant or incomplete
 
• Questions did not encourage engagement, i.e., students simply answered "yes" or "no" without explanation, and, unlike an interview, you cannot prompt or probe for more detail
 
• Surveys generally produce lower response rates than other methods
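
As an illustration of the mixture of open and closed questions, here is a minimal sketch in Python; the question wording, the Question structure, and the field names are hypothetical illustrations, not taken from the actual SCARLET survey.

 from dataclasses import dataclass, field
 
 @dataclass
 class Question:
     text: str
     kind: str                                    # "closed" or "open"
     options: list = field(default_factory=list)  # answer options, closed questions only
 
 survey = [
     # Closed question: quick to answer and yields structured data
     Question("Had you used Augmented Reality before this module?",
              kind="closed", options=["Yes", "No"]),
     # Open follow-up: invites the explanation a bare "yes"/"no" omits
     Question("If yes, briefly describe how you used it.", kind="open"),
     # One subject per question: compound questions are split into simpler ones
     Question("How easy was the app to use?", kind="closed",
              options=["Very easy", "Easy", "Difficult", "Very difficult"]),
 ]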
 
 
• '''Final Online Survey (using Bristol Online Surveys)'''


o '''Considerations:'''
 
• Create open-ended questions
 
• Ensure that each question focuses only on one item/issue
 
• Encourage students to participate
 
• Get academic buy-in
 
 
o '''Risks to consider:'''
 
• Students no longer feel obligated to respond since the module is over
 
• Students feel resentful that they feel compelled to participate
 
• Too few respond regardless of encouragement (a simple response-rate check is sketched after this list)
 
• Online survey malfunctions or is inaccessible
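
On the risk of too few responses, one simple mitigation is to monitor the response rate while the survey is still open and to prompt a reminder when it dips. A minimal sketch, with made-up numbers rather than project figures:

 # Hypothetical figures for illustration only, not project data
 invited = 40        # students invited to complete the survey
 responded = 11      # responses received so far
 
 rate = responded / invited
 print(f"Response rate so far: {rate:.0%}")
 
 # While the survey is open, a low rate is a cue to act
 if rate < 0.5:
     print("Below 50%: consider a reminder email and renewed academic buy-in")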
 
 
• '''Focus Group:'''
 
 
o '''Considerations:'''
 
• Select a heterogeneous mix of members
 
• Consider how many group sessions are required or feasible and the size of the group
 
• Develop a session guide or plan (a sketch follows this list)
 
• If working with students, ensure that you reach them early enough in the module; otherwise they are likely to give skewed answers, influenced by the lecturer or library staff, rather than based on their own perceptions
 
• Consider recording and transcribing the session
 
 
o '''Risks to consider:'''
 
• The group was too homogeneous, so answers don’t tell you much
 
• The group lacked any cohesion – no bonding, no sharing
 
• The environment was not conducive to the session, e.g., too warm, too cold, or the room was too small
 
• The incentives offered didn’t work to produce anything meaningful
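
To make the session guide point above concrete, here is a minimal sketch of a timed plan as a plain Python structure; the timings and prompts are hypothetical illustrations, not the guide the Team actually used.

 # Each entry pairs a duration in minutes with an activity; the warm-up
 # comes first to build the group cohesion the risks above warn about.
 session_plan = [
     (10, "Welcome, consent forms, explain recording and confidentiality"),
     (10, "Icebreaker: each participant describes prior experience of AR"),
     (25, "Core prompts: what worked, what got in the way of the object"),
     (10, "Wrap-up: one improvement each participant would suggest"),
     (5,  "Thanks, and hand out the 'thank you' incentives"),
 ]
 
 total = sum(minutes for minutes, _ in session_plan)
 print(f"Planned session length: {total} minutes")   # prints 60 minutes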
 
 
• '''Results and highlights:'''
 
o Write up the highlights from the survey and focus group as soon as possible
 
o Share with others who may have also been present, to ensure accuracy
 
o Work toward getting quotes, although do not attribute them (confidentiality)
 
o Make it relatively short so that others can re-use it in blog posts, case studies, etc.
 
 
• '''Dissemination:'''
 
• Consider a variety of dissemination methods and channels including:
 
o Blog posts
 
o Professional publications
 
o Case Studies
 
o Conferences, meetings, workshops and events
 
o News items, articles and features on websites and in journals
 
o Academic journals
 
o '''Risks to consider:'''
 
• Academic articles require a great deal of time and the direct input of the academic; this form of dissemination is therefore often the most problematic
 
 
Not every factor carries significant risks, but it is important to consider what risks there are before embarking on any kind of evaluative process.  It may seem obvious, but the toolkit is meant to set out all of the steps necessary to move toward completion.  It also maps nicely back onto a general bid-writing process, which likewise has to consider potential risks and pitfalls.
 
In the end, the project is valuable and significant, responding to the Horizon Report's call (2010) to watch AR as a pivotal technology in the coming years.  We have produced something which uses technology to enhance the experience of working with a material object and which, importantly, does not replace or get in the way of that experience; as I mentioned in my previous blog post, most students felt that the use of AR with Special Collections was indeed valuable to their experience of the texts there.
 
 
Moving forward, the toolkit will also benefit from a new group of 1st year students lined up for a similar course, along with a group focusing on a fragment of the Gospel of John under the tutelage of Dr. Roberta Mazza; both groups have shown a great deal of interest and enthusiasm for the content and the means of delivery.  If, as Confucius said, "success depends upon previous preparation", then this toolkit promises to be a useful and compelling aid in the creation of future projects involving Augmented Reality in a variety of places and contexts.
 


[1] SCARLET Dissemination Workshop

[2] SCARLET Evaluation

[3] Demonstration du projet SCARLET

[4] Thoughts on the ELI Conference

[5] Demonstration Content

[6] The SCARLET Focus Groups: 2nd and 3rd Year Undergraduates

[7] The SCARLET Project Survey

[8] The SCARLET Project with First Year Undergraduates

[http://teamscarlet.wordpress.com/2011/11/23/scarlet-focus-group/] SCARLET Focus Group