Upcoming workshop presentation: ‘XVivo: The case for an open source QDAS’

I will be giving a presentation on the need for qualitative researchers to embrace open source software, and on my work on Pythia, as part of the Urban Studies Monday workshops at the University of Glasgow on 26th November.


Qualitative data analysis software (QDAS) has the potential to revolutionise both the scale of qualitative research and the array of possible analysis techniques. Yet currently available software still imposes unnecessary limits that prevent this potential from being fully realised. Additionally, it locks data, and the analysis performed on it, within proprietary file formats that make the archiving and sharing of research difficult. Due to similar issues, open source solutions have seen increasing popularity in quantitative research, and it is perhaps time that qualitative researchers joined them. This presentation will therefore discuss both the problems with current proprietary QDAS and the potential of open source software for qualitative researchers. To do this, the myriad issues the Welfare Conditionality project experienced with NVivo will be used to exemplify the problems created by a reliance on expensive, slow, and poorly designed proprietary software. The second half of the presentation will focus on Pythia, an open source QDAS library written in Python that I have been working on. By covering its design philosophy, current progress, and long-term plans, the presentation will highlight the potential of open source software to solve the problems with current qualitative software, enable new creative analysis techniques, and allow researchers to reclaim control of their data.

The workshops, as far as I am aware, are open to Urban Studies’ staff and PhD students only. However, as usual I will upload a copy of my presentation slides after the event. Additionally, as part of the preparation for the presentation I will be aiming to write a few short blog posts covering the design philosophy of Pythia, elaborating further on why there is a need for an open source QDAS, and providing write-ups and screenshots of progress. Unfortunately, development ground to an absolute halt during the eight months where all my spare time, energy, annual leave, mental health, hopes, dreams, and general will to live were sacrificed at the job-hunting altar. I now have around 12 months before that hell begins again, so once I have taken care of the journal article writing backlog that also built up during that time, the plan is to filter work on Pythia back into my weekly schedule.

Upcoming conference presentation: ‘The Universal Acceptance of Conditionality?’

I will be presenting next month at the Welfare Conditionality: Principles, Practices and Perspectives conference, 26-28 June 2018, University of York.


Critics and campaigners against conditionality for welfare benefits have highlighted the severe harms resulting from sanctions and the stigmatisation of benefit claimants. In response, proponents of conditionality have often replied with the refrain that “there has always been conditionality in the system” and point to high levels of public support for conditionality, including amongst benefit claimants. Yet missing from these debates has been a detailed account of how claimants draw upon and construct justifications and critiques of welfare policy and practice. To fill this gap, this presentation explores the ethical arguments made by welfare service users who participated in the Welfare Conditionality project. It draws on Boltanski and Thévenot’s (2006) theory of justification to outline the diversity of ethical orders participants called upon to construct their arguments, including the ways compromises and contradictions are defended or denounced. Even the majority of participants who agreed with the general principle that able-bodied claimants should actively be looking for work rarely made reference to only one ethical order. Frequently it was argued that the sanctions regime is disproportionate and actively undermines the reciprocal duty to provide claimants with support. Furthermore, participants expressed concern that within the current welfare system there is a lack of a civic ethos amongst DWP and private contractor staff, a predominance of an industrial, target-driven service model, and a violation of human dignity and universal rights.

Welfare Conditionality final research findings

The final findings papers for the Welfare Conditionality project, on which I was a Researcher and NVivo Lead, have been published today. As covered in The Guardian, Benefit sanctions [were] found to be ineffective and damaging.

From the Guardian article:

Benefit sanctions are ineffective at getting jobless people into work and are more likely to reduce those affected to poverty, ill-health or even survival crime, the UK’s most extensive study of welfare conditionality has found.

The five-year exercise tracking hundreds of claimants concludes that the controversial policy of docking benefits as punishment for alleged failures to comply with jobcentre rules has been little short of disastrous.

“Benefit sanctions do little to enhance people’s motivation to prepare for, seek or enter paid work. They routinely trigger profoundly negative personal, financial, health and behavioural outcomes,” the study concludes.

The Canary has also covered the findings, reporting that ‘The latest news on the DWP has left its reputation in tatters’.

From the article:

A groundbreaking study, conducted over five years, has left the reputation and operating practices of the Department for Work and Pensions (DWP) in tatters. Specifically, the report’s authors heap criticism on one part of the department’s operations: the benefit sanctions regime. But a standout point from the report was that the DWP should [pdf, p12] “cease” applying sanctions to disabled people.

[…] The Welfare Conditionality project (2013-2018) was funded by the Economic and Social Research Council. ‘Conditionality’ is the idea that people who receive benefits should have to meet certain requirements, such as applying for jobs, or lose their payments. A ‘sanction’, in this context, means the withdrawal of benefits, normally for a fixed period. 

As well as the Overview paper there are separate briefings for the nine policy areas covered in the research –

Anti-social behaviour and family interventions

Disabled people

Lone parents

Social housing (fixed-term tenancies)

Universal Credit

Failure to Justify: The absence of a ‘natural situation’ with benefit sanction decisions

Copy of the slides for my presentation on Tuesday 10th April 2018 at the British Sociological Association’s Annual Conference.


UK welfare reform has seen sanctions become a crucial form of punishment for claimants who are judged to have failed to meet behavioural conditions. Drawing on data from an ESRC-funded study (2013-2018) of the efficacy and ethicality of welfare conditionality in England and Scotland (see: www.welfareconditionality.ac.uk), the paper explores the ethical arguments made by 207 participants who reported experiencing one or more sanctions. These arguments are explored through Boltanski and Thévenot’s (2006) theory of justification, detailing how participants justified or critiqued sanction decisions through reference to different models of justice. In making their arguments, participants often pointed to sanction decisions not being a ‘natural situation’, one with a clear flow of events in accordance with general principles. Participants reported being unaware their actions were sanctionable, felt that deferring sanction decisions to a ‘decision maker’ disempowered them, and that there was a haste to sanction without adequate opportunity to provide an explanation. More broadly, the sanctions system was critiqued for having an industrial model of service provision, where claimants are ‘just a number’, and for lacking a civic ethos throughout. This pervasive sense of injustice, despite the acceptance amongst a significant number of participants of the general principles of conditionality, brings into question whether the current sanctions system is compatible with the criteria required to be a justifiable order. The paper will therefore also reconsider the debates between pragmatic and critical sociologies, particularly the importance of symbolic forms of domination and violence.

How to use a Word macro to fix interview transcripts for auto-coding in NVivo

Within NVivo, and likely other QDAS packages as well, it is possible to use the structure of interview transcripts for auto-coding. Basically, what auto-coding does is go through the transcript and, using criteria specified by the user, assign text to chosen nodes (further explanation of auto-coding and how to do it in NVivo is available on the NVivo help website). This can be used to separate out the different speakers within a transcript, so that everything each speaker says is coded to a node with their participant code number. Even in one-to-one interviews this can be worth doing, so that any word frequency queries, word clouds, etc. can be limited to only the sections of the transcripts where a participant is speaking. However, any mistakes in the structure of the interview transcripts can result in them being incorrectly auto-coded. Depending on the extent and nature of the errors, this can be a headache to fix manually. This post briefly covers the types of errors that can arise and provides a step-by-step guide to creating a Visual Basic macro within Microsoft Word that automates the process of fixing the paragraph styles in transcripts so they can be auto-coded without error.
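To illustrate the kind of repair the macro automates, the sketch below mimics the logic in Python rather than Visual Basic: it decides which paragraph style each line of a transcript should have. The speaker-label convention (`INT:`, `P101:`) is an assumption for illustration only; the actual macro works on Word paragraph styles directly.

```python
import re

# Pattern for a speaker-label paragraph, e.g. "INT:" or "P101:".
# These labels are hypothetical; the point is that auto-coding
# needs speaker turns marked with a consistent paragraph style.
SPEAKER_RE = re.compile(r"^(INT|P\d+):$")

def classify_paragraphs(paragraphs):
    """Assign each paragraph the style auto-coding expects:
    'Heading 1' for speaker labels, 'Normal' for speech."""
    return [
        ("Heading 1" if SPEAKER_RE.match(p.strip()) else "Normal", p)
        for p in paragraphs
    ]

transcript = [
    "INT:",
    "Could you tell me about your experiences?",
    "P101:",
    "Well, it all started when...",
]

for style, text in classify_paragraphs(transcript):
    print(f"{style}: {text}")
```

Running something like this over a transcript's paragraphs and comparing the result to the styles actually applied is essentially what the macro does in bulk.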

Continue reading

A Qualitative Computing Revolution?

The challenges of data management and analysis on a large longitudinal qualitative research project

Computer aided qualitative data analysis has the potential to revolutionise both the scale of research and the range of possible analysis techniques. Yet the software itself still imposes limits that prevent this potential from being fully realised. This post looks at the large and complex dataset created as part of the Welfare Conditionality research project, the analytical approach adopted, and the challenges QDAS faces.

The Welfare Conditionality project has two broad research questions in setting out to consider the issues surrounding sanctions, support, and behaviour change. Firstly, is conditionality ‘effective’ – and if so, for whom, under what conditions, and by what definition of effective? Secondly, is welfare conditionality ‘ethical’ – how do people justify or criticise its use, and for what reasons? To answer these questions, we have undertaken the ambitious task of collecting a trove of qualitative data on conditional forms of welfare. Our work spans nine policy areas, each of which has a dedicated ‘policy team’ responsible for the research. The policy areas are: unemployed people, Universal Credit claimants, lone parents, disabled people, social tenants, homeless people, individuals/families subject to antisocial behaviour orders or family intervention projects, (ex-)offenders, and migrants. The research has consisted of 45 interviews with policy stakeholders (MPs, civil servants, heads of charities), 27 focus groups with service providers, and three waves of repeat qualitative interviews with 481 welfare service users across 10 interview locations in England and Scotland.

Continue reading

Series Intro: Useful apps, services, and software.

This post is an introduction and placeholder for a planned series of posts on useful apps, services, and software. Once there are a few posts in the series I will eventually promote this post to a page with an index of all the posts from the series.

I decided to make a series for this because, although using computers has become a key part of academic work, too many academics remain uncomfortable using them. Often I come across people using Word for anything that involves text – not because it is the best tool for the job, but because they are unaware of the alternatives. Even when someone knows it is not an ideal solution, it is not always easy to find a good entry point for learning new software. At a training event I attended last year I was sat next to a professor. From the introductions, he clearly had years of experience using an advanced software package for tasks similar to those the training covered. Yet it became clear from the start that he was uneasy and disorientated when facing a new application with an unfamiliar interface. After accidentally launching another application and then opening the wrong file, which resulted in a garbled mess of symbols appearing on the screen, he got up and left only ten minutes into the session. While it is rare for someone to feel so at a loss that they leave, I have heard multiple times from PhD students that, despite feeling like walking out, they have persisted through a training session and still come away no more confident in knowing how to use the software. Such experiences end up reinforcing self-perceptions of not being ‘a computer person’.

Continue reading

Tasker: Lone Worker streamlining

Tasker is one of the top reasons I regularly give when asked why I prefer Android to iOS. The app was designed to provide a means to run tasks based on contexts as defined by the user. For example, turning the notification volume on the phone to silent at night or reading out text messages received while driving. Through the wide range of triggers, actions, and third-party plug-ins it is effectively a visual programming language for Android devices that can be used for more complex applications. The above video shows an example of using Tasker to automate signing in and out as part of a lone worker procedure. The video shows version 1.0. After initial field-testing a small revision was made to add another screen that displays the details to be used for confirmation before sending the sign in text.

Version 1.1, therefore, includes the initial task that launches a scene through which a contact to sign in with can be selected. The sign-in task then grabs a list of any calendar events within a twenty-minute window before and after the current time. From the calendar events, each of the following is added to a set of arrays: event title (who the interview is with), location (the participant’s home address where the interview is taking place), and details (the participant’s contact number). A regex is then used to find the position in the event title array of any event whose title begins with ‘Interview’. The full event details are then taken from the other arrays and, alongside an expected sign-out time, are sent in a text message to the chosen contact.
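For readers who do not use Tasker, the sign-in lookup described above can be sketched in Python. The event fields and the ‘Interview’ title convention come from the description; the sample data, helper name, and two-hour sign-out estimate are illustrative assumptions.

```python
from datetime import datetime, timedelta

def find_interview(events, now, window_minutes=20):
    """Return (title, location, details) for the first calendar event
    within +/- window_minutes of `now` whose title begins with
    'Interview', or None if nothing matches."""
    window = timedelta(minutes=window_minutes)
    for event in events:
        if abs(event["start"] - now) <= window and event["title"].startswith("Interview"):
            return event["title"], event["location"], event["details"]
    return None

now = datetime(2017, 5, 10, 14, 0)
events = [
    {"title": "Team meeting", "start": datetime(2017, 5, 10, 9, 0),
     "location": "Office", "details": ""},
    {"title": "Interview with P204", "start": datetime(2017, 5, 10, 14, 10),
     "location": "12 Example Street", "details": "07700 900000"},
]

match = find_interview(events, now)
if match:
    title, location, details = match
    sign_out = now + timedelta(hours=2)  # illustrative expected sign-out
    print(f"Signing in: {title} at {location} ({details}), "
          f"expected sign out {sign_out:%H:%M}")
```

Within Tasker the same filtering is done with its array and regex actions rather than a function call, but the flow of data is the same.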

Having signed in, a permanent notification is created through which it is possible to sign out or to raise an alarm if the researcher is in danger. Both require a confirmation button to be pressed to avoid accidental selection. The sign-out task sends a text to notify the chosen contact that the interview is over, removes the permanent notification, and clears all the variables used in the tasks. Red Alert sends a text raising the alarm, and also offers a notification option to call the contact. Finally, if the sign-out task has not run by the expected sign-out time, the phone vibrates and requests an estimate in minutes of how much longer the interview is expected to take. The entered value is then added into a text message in the format: “Sorry, running over, should be another X minutes roughly”.

As well as working on a general post on Tasker as part of a planned series on ‘useful apps & services for academics’, I am also aiming to have a short guide written in the next few weeks on how to create within Tasker the lone worker sign-in automation shown in the video.

Improving NVivo with AutoHotKey: Faster Attribute Values Input Script

The core component of the fieldwork for the Welfare Conditionality research project is an ongoing three waves of qualitative interviews with 481 welfare service users sampled across nine different policy areas. To assist with descriptive statistics and finding subgroups amongst our sample, we have a set of key attributes such as the participant’s age, household, benefits received, etc. Furthermore, we have additional attributes specific to each policy area. As a result, around fifty attributes in total need values entered for them after each interview. By default NVivo offers three main ways to add attribute values, none of which is ideal for this amount of data entry.

The primary means of adding attribute data in NVivo is through the Attribute Values tab of the Node Properties dialogue window. This presents a list of drop-down menus, one for each attribute, and can be laborious to work through. Similar to this is opening the classification sheet and working along the row for the participant. In addition to carrying the same risk of developing RSI as the first method, this method has become nearly impossible to use as our project file has grown larger: any change to an attribute value with the Welfare Service User classification sheet open now results in a 1-2 minute wait for NVivo to process the change. The third option is to save attribute data to an Excel sheet and import it into NVivo. This introduces its own problems with ensuring values are typed correctly, or with setting up the Excel sheet with acceptable values defined for each column, and still offers no real time savings in the data entry process.
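One way to blunt the typo problem with the spreadsheet route is to validate the sheet before importing it. This is not an NVivo feature but a minimal Python sketch of the idea; the attribute names and allowed values are hypothetical stand-ins for the project's real ones.

```python
import csv
import io

# Acceptable values per attribute column (hypothetical examples).
ALLOWED = {
    "Employment status": {"Employed", "Unemployed", "Retired"},
    "Housing": {"Social tenant", "Private tenant", "Owner"},
}

def check_sheet(csv_text):
    """Return a list of (row, column, bad_value) problems so typos
    are caught before the sheet is imported into NVivo."""
    problems = []
    reader = csv.DictReader(io.StringIO(csv_text))
    for i, row in enumerate(reader, start=2):  # row 1 is the header
        for column, allowed in ALLOWED.items():
            if row[column] not in allowed:
                problems.append((i, column, row[column]))
    return problems

sheet = (
    "Participant,Employment status,Housing\n"
    "P101,Unemployed,Social tenant\n"
    "P102,Unemplyed,Owner\n"  # deliberate typo to be flagged
)
print(check_sheet(sheet))
```

A check like this catches the misspelled ‘Unemplyed’ before NVivo silently creates it as a new attribute value.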

The above video is an example of using a script I wrote in AutoHotKey to provide another alternative. The script translates keypresses on the numpad into a series of keypresses that select the desired attribute value and then move focus to the next attribute. For example, if the second value for the selected attribute is ‘Unemployed’, pressing ‘2’ on the numpad sets the value to ‘Unemployed’ and moves the focus to the next attribute, so the user can press another numpad key to input the next attribute value. Alongside post-interview checklists that have the number written next to each value, it greatly reduces the amount of time required for data entry. Further details about the script and how to use it are included below. The script file and an executable version of it are available from a GitHub repository.
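The core of the script can be thought of as a simple translation table. The Python below only illustrates that logic; the real script is written in AutoHotKey, and the exact key sequence (open the drop-down, arrow down, confirm, Tab onward) is an assumption about how the NVivo dialogue is driven.

```python
def numpad_to_keystrokes(value_number):
    """Translate a numpad digit into the keystroke sequence sent to
    NVivo: open the attribute's drop-down, move down to the chosen
    value, confirm it, then Tab to the next attribute."""
    return (
        ["{Alt+Down}"]                 # open the drop-down menu
        + ["{Down}"] * value_number    # move to the Nth value
        + ["{Enter}", "{Tab}"]         # confirm and jump onward
    )

# Pressing '2' on the numpad would select the second value
# (e.g. 'Unemployed') and move focus to the next attribute.
print(numpad_to_keystrokes(2))
```

In AutoHotKey the equivalent is a set of hotkey definitions that send these keystrokes directly; the script in the repository is the authoritative version.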

Continue reading