
Aisha Walker

Thinking onscreen

24 July 2014
by Aisha

Threlfall, J., Nelson, N. & Walker, A. (2007). Report to QCA on an investigation of the construct relevance of sources of difficulty in the Key Stage 3 ICT tests

This report was published in 2007 and used to be available from http://www.naa.org.uk/libraryAssets/media/Leeds_University_research_report.pdf. Unfortunately, that link is now broken and the original report no longer seems to be available on the NAA website, so I am making it available here.

The work was originally commissioned by QCA in the context of a plan to make testing of ICT skills compulsory for all 14-year-old students in England from 2008. The tests were piloted but not, in the end, implemented. The Leeds team was commissioned to explore whether on-screen testing might involve construct-irrelevant sources of difficulty that differ from those in paper tests. Our conclusion was that sources of difficulty do differ between on-screen and paper-based tests, and we proposed frameworks to illustrate the sources of difficulty in on-screen tests. It is important to note that, although this research was conducted on the KS3 ICT pilots, the findings and frameworks apply to on-screen testing in general rather than to the KS3 ICT tests alone.

Download report

4 July 2014
by Aisha

Interviewing the literature: a tip for students

All academic research starts with a literature review, and this is one of the big headaches for students writing theses or dissertations. Tutors tell students to ‘read critically’ and sometimes talk about “interrogating the literature”, which sounds terrifying (perhaps you shine a bright light on the paper to make it give up its secrets). One way to approach the reading for a literature review is to imagine that you are interviewing the author. In an interview you would ask questions such as “Why did you do this study?”, “What did you do?” and “Why did you do it that way?” You would ask about things such as the number of participants, how they were selected, and how the data were collected and analysed. You would also ask what the study found, why this is significant and how the researcher can be sure that those conclusions are robust. Every research paper and report should be able to answer these questions. So, when preparing to undertake a literature review, start by writing a list of your ‘interview’ questions. As you read each paper, note the answers to your questions. If you can’t find the answer to a question then that is important: it may be a weak point in the paper or it may help to identify a gap in the research. Either way, an unanswered question is something that you should mention when you write your literature review.

2 July 2014
by Aisha

Ethics and internet research

Facebook has been accused of a breach of research ethics because it did not obtain consent for its study into the emotional effects of manipulating user newsfeeds. The overall terms and conditions for Facebook include the statement that information may be used “for internal operations, including troubleshooting, data analysis, testing, research and service improvement” (https://www.facebook.com/about/privacy/your-info#howweuse). However, researchers and legal experts have pointed out that the feed manipulation study went beyond gathering information and involved experimental interventions. The study involved varying the content of users’ feeds to find out if this affected the users’ emotions. For this type of study, university ethics committees typically require explicit informed consent: users need to know that they are taking part in a study, how the study will be conducted and (broadly) what the researchers are looking for. Participants also need to know if there are any potential risks from taking part in the study and should be able to opt out at any time. In this case, the risk was that a Facebook user would experience lowered mood and, however slight this effect might be, it is still unethical to expose people intentionally to this danger (in the cause of research) without their knowledge or consent. In addition, because people did not know that they were participants in a study, they did not know about the possibility of opting out.

The Facebook feed case shines an interesting light on the ethics of internet research. For a researcher, the internet is a potential goldmine, full of data. Do you want to know what people think ‘happy’ looks like? A Flickr search will provide you with thousands of images to analyse and count. Do you want to know how people mark, celebrate, struggle with and support each other through the long days of fasting? The Twitter hashtag #Ramadan2014 will tell you. This type of data is ‘wild’; the images and tweets have been posted publicly and are free for anyone to look at and to analyse. However, most people, when posting online, do not think that they are creating research data. They are simply publishing interesting photographs or talking to friends, family or fellow commenters. Therefore, an important question for researchers is whether it is ethical to collect and conduct research on the data. For example, one commenter on the Guardian article linked above cited the MELC (Metaphor in End-of-Life Care) study as an example of unethical internet research. This study uses a large corpus of data, some of which is drawn from internet forum postings. However, although people’s data, in the form of published posts, is used in the MELC study, the people are not individually identifiable, nor is there any possible risk to the people who created the dataset. An example of Twitter research is Doughty et al. (2011), who looked at tweets about the TV show The X-Factor. This paper includes quotations from tweets, although the users are not identified. However, the paper does not mention ethics and does not indicate whether the study underwent ethical review. It is likely that ethical review was considered unnecessary because the study was conducted with data already in the public domain. Nevertheless, individuals who find their tweets or posts discussed in academic papers may believe that their permission should have been asked, especially if they feel that their words have been misrepresented.

Research into language use on the internet is still a relatively new field and some of the ethical questions are still to be worked out. With ‘old’ media, even in digital form, researchers can assume that it is legitimate to analyse published texts and that it is not necessary to ask permission of the writers or publishers. These texts tend to be created by professional writers who understand that when something is published it becomes, to a certain extent, the property of the public*, who may interpret, discuss and use it in ways not intended by the original author. It’s painful for authors to think of their carefully crafted work being placed under cat litter trays but, once they have been published, texts are no longer under the control of their authors. Online texts such as tweets, posts, comments, images, YouTube videos, etc., are written by diverse authors, many of whom will not have thought about post-publication ownership. These authors may not feel that collation and analysis of their work is legitimate and may not agree that through publication the work has been made available to researchers.

The Facebook feed study clearly should have undergone ethical review, and it required informed consent. With other types of internet language/communication research, however, the ethical position is far less clear. On a small forum, informed consent is a possibility, but it is not feasible to collect informed consent from large numbers of Twitter users. Nonetheless, research into the ways that people communicate online is essential. Without research it will be very difficult ever to understand incidents such as last year’s Twitter harassment of Caroline Criado-Perez, yet a requirement for informed consent would make such research impossible. This would, in its own way, be as unethical as conducting a study into the emotional impact of different newsfeed content without the knowledge or consent of participants.

* I am not talking here about intellectual property rights, which remain, of course, with the author.

References

Doughty, M., Rowland, D. & Lawson, S. (2011). Co-viewing TV with Twitter: more interesting than the shows? CHI 2011. Available from http://eprints.lincoln.ac.uk/12540/

2 April 2014
by Aisha

The danger of the ‘digital natives’ myth

The ‘digital natives’ meme is extraordinarily pervasive. The term has now been around for almost 15 years and has been substantially critiqued, and yet it is still in use. For example, today I am due to speak at an event and the question I have been given includes this: “Are you getting more ‘digital native’ trainees?” The ‘digital natives’ myth is damaging to two groups of people. The first is the so-called ‘digital natives’ themselves, for two reasons: first, the assumption that young people already know how to use anything digital (and that it is therefore not necessary to teach them); second, the assumption that copious use of digital technology is necessary to engage young people. The second group damaged by the myth is the so-called ‘digital immigrants’, in particular teachers. There is a risk that teachers assume that, because they were not brought up with digital technology, they are automatically incompetent and will forever be lagging behind their students. They believe that they lack ‘digital literacy’ and need training in order to be able to use digital tools in the classroom. Teachers who believe the myth, and who believe that they are not ‘digital natives’, are likely to lack confidence in their own digital abilities. I have spoken to rooms full of smartphone-owning educators who are happy users of Skype, Facebook and Twitter, who write emails daily and who word-process all their documents, yet who believe that they are not ‘digitally literate’. They cannot see their own competence because it is obscured by the belief that they are not ‘digital natives’. These teachers need two things. The first is confidence in their own digital abilities. If you can use Word, email and the WWW, then you can learn to use and create many other digital resources. For example, the procedural skills that you need to use Word will get you started with using Audacity to create digital audio.
The second is the vision to imagine how digital tools might be used in the classroom. This is not so much a matter of specialist knowledge as of thinking about the areas of teaching that are problematic and then about how they might be improved. For example, a well-known problem with classroom writing is that it is artificial and lacks an audience. Digital tools enable student work to be published to an audience in several different formats. So, those procedural skills developed using Word can be applied to Audacity to help students voice the stories that they have written, which they can then publish online so that others may listen.

As I said in my talk today, to work in digital classrooms, educators need to build their CVs – Confidence and Vision.

20 February 2014
by Aisha

Misleading Metaphors

I have written elsewhere about White and Le Cornu’s (2011) metaphor of digital residents and visitors. The metaphor is reiterated by people such as Gavin Dudeney (for example http://englishagenda.britishcouncil.org/seminars/digital-literacy) and is used to categorise people into two groups: those who make extensive use of digital tools and those who do not. However, when used in this way, this place-based metaphor is as false as the language-based metaphor (Prensky 2001) that it replaces. Place metaphors are widely used when talking about technology; consider, for example, the term ‘cyberspace’. Gee (2005) talked about ‘social semiotic spaces’ and ‘affinity spaces’, arguing that these might be physical or virtual. Stevenson (2008) looked at ‘environment’ as a metaphor for learning technologies. However, as both Gee and Stevenson acknowledge, digital technology is not a single place but rather a category of places. It is not generally assumed that people who reside in a city are automatically familiar with other cities and will have no trouble finding their way around a city that they have never previously visited. There will be aspects of cities which are familiar and whose absence may lead to frustration. For example, residents of London are accustomed to an underground railway system and to supermarkets with 24-hour opening. This does not mean, however, that Londoners will automatically be able to navigate Montreal’s streets and ‘underground city’. Residents of Bangkok (or even Birmingham) may struggle to reconcile the London tube map with the geographical relationships between places in the city.

In the digital world, people who reside in, say, Facebook will not automatically be able to navigate a virtual learning environment (VLE) such as Blackboard. Indeed, anecdotally, I have encountered a large number of students who are habitual users of digital tools such as Facebook, Twitter, Google Sites, MS Word, email, etc., and yet struggle with our university VLE. This is not because the VLE is poorly designed or intrinsically difficult to use but simply because it is unfamiliar. In addition, the VLE contains context-specific materials and activities, and students are expected to use the VLE in particular ways that focus on learning (and a curriculum).

Metaphors that imply overarching digital competence (or its absence) are problematic because they lead to an assumption that people who fall into the ‘competent’ category do not need to be taught how to use digital tools. This, in turn, can lead to learners struggling because they are not able to access essential information and/or are using tools in ways that are inefficient or ineffective. Given that digital tools are widely used in 21st-century education, this can put students at a serious disadvantage. Kennedy et al. (2010) propose an alternative paradigm. They identify four categories ranging from ‘basic user’ to ‘power user’ but argue that each category applies only in relation to a specific tool. Thus, an individual may be a ‘power user’ of Facebook but a ‘basic user’ of Blackboard. In rejecting easy metaphors, Kennedy et al. have developed a way of looking at digital competence that is far more useful because it acknowledges that an individual can be both expert and novice in the various ‘places’ that constitute the world of digital technology.

Kennedy, G., Judd, T., Dalgarno, B. & Waycott, J. (2010). Beyond natives and immigrants: exploring types of net generation students. Journal of Computer Assisted Learning, 26(5), 332-343.
Prensky, M. (2001). Digital Natives, Digital Immigrants. On the Horizon, 9(5). http://www.marcprensky.com/writing/Prensky%20-%20Digital%20Natives,%20Digital%20Immigrants%20-%20Part1.pdf
White, D.S. & Le Cornu, A. (2011). Visitors and Residents: a new typology for online engagement. First Monday, 16(9). http://journals.uic.edu/ojs/index.php/fm/article/viewArticle/3171/3049

7 February 2014
by Aisha

Personal and professional

Further to my previous post… I decided to create two blogs, firstly because I wanted the academic theme to be more coherent and to make it easier for readers to find the academic material. However, I also found that I was holding back personal content because I felt that it didn’t fit within an overall academic theme. I don’t post anything that is inappropriate to my professional life, but if readers are looking for a discussion of ‘digital natives’, do they really want to read about crochet flowers?

By the way, if you do want crochet flowers, there are some at www.cosyrosie.net 🙂

23 January 2014
by Aisha

Sandpit Reflections (part two)

What I learned from and about the sandpit experience

Without doubt, the sandpit was, as my Head of Department put it, “a valuable learning opportunity”. If I ever have the opportunity to attend another sandpit, as I hope I will, I will approach it differently.

The basic principle of a sandpit is that a diverse group of people is mixed and stirred in the hope that something rich, wonderful and innovative will emerge. In the EMOTiCON sandpit there were researchers from Psychology, Philosophy, Computer Science, Sociology, Neuroscience, Education, Linguistics, Marketing, Drama, International Relations… and more. This is a range of disciplines that would rarely talk to each other, and our perspectives on and approaches to research are very different. This means exposure to new and exciting ways of working, but it also requires flexibility and receptivity. As one member of the group said, “When I came here, I thought that there was only one way to measure empathy. Now I realise that there are many ways to do it.” In addition, in the second stage, projects and groups change rapidly. The landscape of phase two has underwater volcanoes, so that with each feedback round new islands emerge, other islands sink and some simply change shape. Whilst this is exciting, it can also be unsettling and disorientating, especially as it involves the formation of new alliances and may evoke memories of the school playground: of choosing and being chosen.

The support provided by the mentors and facilitators was extraordinary. Naturally, all of them understood the sandpit well and knew that the intensity of the process might create an emotional pressure cooker. On this sandpit there were no real emotional explosions, but I would not be surprised if sandpits occasionally contain some pretty uncomfortable conflict. The mentors can provide listening, advice and, if needed, mediation. I was extremely impressed by the skill of both mentors and facilitators, and also by their commitment to the process and the extent to which they cared about participants.

I was surprised that there was no notepaper. I had assumed that, because the event was at a conference centre, notepads would be provided. Instead we had flipcharts and post-its. In fact, I did feel the need for notepaper for working through ideas, but the rationale for flipcharts and post-its (I think) is that all thinking and ideas stay in the group; everything is posted where it can contribute to the definition of ‘the problem’ and the development of the research questions. I would still take notepaper next time, though (and my own coloured gel pens, because I am sure that they help me to think).

What I would definitely do differently, if there is a next time, is make more use of ‘buddy presentations’ and ‘sandbox’ slots. We were allocated ‘buddy pairs’ before the event, with the instruction to connect and find out about each other so that we could introduce each other to the group. There were also a couple of questions to discuss. My buddy and I did our presentation early in the week and I think that we (well, I) could have used this more strategically. ‘Sandboxes’ are strictly timed two-minute slots where you can present individual ideas to the group. Both ‘buddy presentations’ and ‘sandboxes’ provide opportunities to communicate who you are and where you come from in terms of skills, expertise, experience and philosophy, and it is important to take advantage of this. Fairly late in the process, one of the mentors said to me, “You know, I’m still not really sure what it is that you do”, and this was because I had not stated it clearly enough to the group.

Some of my colleagues have asked if people took project ideas to the sandpit, and my feeling is that this probably did not happen. However, people clearly did take ideas about the sorts of methodologies that they might contribute and the areas of literature that were familiar. This, again, is about being clear about who you are (academically) and what you can offer to a project team. Several people were on more than one project team, so whilst two thirds of the participants were not successful with regard to funding, others were on more than one successful bid. However, of the four successful principal investigators (PIs), two belonged only to their own project team (by Thursday evening). The other two seemed to have concentrated more intensely on their own proposals than on the other teams to which they belonged. It seems that focussing on a single project will maximise the chance of being a PI, but keeping options open will maximise the chance of getting some funding.

The multiple rounds of project pitches led to a lot of detailed and constructive feedback from the panel. I suspect that there were variations in the extent to which feedback was acknowledged and implemented. The panel members, both funders and mentors, were always available and always willing to talk. This, alongside the speed of the process, is one of the great differences between a sandpit and normal research bid development. Normally, feedback comes at the end of the process and there is very little opportunity to respond. In a sandpit there is not only the opportunity to revise the proposal in response to feedback but also to discuss the proposed revisions with the panel. This is invaluable, and my observation was that some people made more use of this opportunity than others did.

By the end of the week everyone was exhausted, and most people seemed to be popping paracetamol by Friday afternoon. However, even the worst possible outcome from a sandpit is to have met some amazing academics, to have discussed novel and exciting ideas and to have spent a free week in a luxury hotel. The best is three years of funding for a research project which is genuinely engaging. One participant said, whilst we were waiting for the final feedback and decisions from the panel, “If I don’t get it, what will be really disappointing will be not doing the project.” He left with the best possible sandpit outcome. As for me, well, here’s to better luck next time!

Tips for sandpits

  1. Do make the most of individual presentations to the group.
  2. Don’t take fixed ideas about the sort of project that you would like to do.
  3. Do attach yourself to several project teams unless you are really, really committed to a single project.
  4. Do listen carefully to the feedback from mentors and funders and take advantage of the opportunity to discuss potential revisions.
  5. Do set out to enjoy yourself (but make sure that you have plenty of paracetamol).

The EMOTiCON sandpit was facilitated by Know Innovation and held at Cranage Hall.