This chapter presents an empirical account of deploying and evaluating the novel accounting systems produced in the previous chapter, and the design requirements that arise as a result.
Previous chapters outlined the work of a charity and its implications for design, the process of designing accounting tools, and some reflection on that design process.
This chapter picks up where the previous one ended, detailing the long and drawn-out deployment of the systems.
The chapter discusses how the systems were deployed: when, in what manner, and with whom.
It introduces some new participants who were not charity workers in the same sense, but who were recruited to evaluate the tools from an additional perspective.
Findings from the research are then presented in grouped form.
Finally, the chapter discusses the performance of the evaluation itself, as well as design implications for future systems and for open data.
This study and evaluation covers a staged set of deployments of the tools designed in Chapter 5 across the participating organisations. During the final stages of the “User-Centred Design” phase of research, two additional organisations were approached and were involved in steering a set of changes to Rosemary Accounts in particular. In June 2017 the tools were deployed with a view to evaluating their fitness for purpose, as well as iterating in some small way to iron out problems and increase uptake.
As noted earlier, this took place across several phases of research, as engagement with and uptake of the tools was limited at first.
The deployment and evaluation phase of the research extended across a considerable period, and a variety of methods were used to engage participants in evaluating the systems. These methods are discussed in more depth earlier in the thesis, but are touched upon here in order to situate them more clearly in the research timeline.
Initial deployments occurred between July and September 2017.
The intent was to see whether the tools had been designed in a way such that they were flexible enough to be appropriated by workers and integrated into their own workstreams rather than creating new working patterns.
Participants were asked to use the system as they wished, with the understanding that periodic check-ups would be performed.
These check-ups started fairly ambitiously: weekly at Patchwork, and fortnightly or monthly at the other organisations.
It became clear during this process that participants were not actually using the systems. This was most surprising at Patchwork, but made sense given that the organisation's busy summer programme was underway.
After three months of very little engagement it was decided to change tack.
Structured deployments were added as a way of creating a space within the fieldwork in which to engage participants critically with the systems.
A series of challenges or tasks were produced that were designed to walk someone through the system, and these were sent to each participant group after a brief chat to check viability. It was intended to visit the participants after the completion of each task or set of tasks to elicit their reflections and evaluations of the systems.
After an additional month it was clear that participants were not engaging with the tasks either.
It was therefore decided to turn field visits into evaluation sessions in which the researcher would walk through the tasks with the participant. These resembled traditional cooperative evaluation sessions.
These were audio recorded and transcribed.
These sessions were performed across six months, scheduled around the availability of the participants. Later, hoping to capture some unshepherded data, I issued a short challenge at the tail end of the summer: to capture a “week in the life” using the tools. This was followed up with an exit interview with each organisation to summarise the research.
Interviews with Stakeholders
Interviews with stakeholders in the charity ecosystem were performed.
This was to evaluate how the system could be interacted with by others, and to see what work would need to be done to accommodate the other “side of the coin” in terms of “accountability work”.
Four interviews were performed.
Two were with accountants. The first worked for an accountancy company briefly mentioned in Chapter 4; this choice was made out of convenience (they are local) and out of sheer morbid curiosity. The second was an independent accountant, who I took to operate quite differently from the first, and who was at the time contracted to work on Patchwork's accounts.
Two were with representatives of small funding organisations: one worked for a local “Community Foundation”, and one for the Big Lottery Fund (now The National Lottery Community Fund).
In all settings, an interview schedule was prepared to understand the nature of the work performed by the participant.
They were then shown instances of the tool and walked through the systems using dummy data.
They were then asked about their impressions of the tool and what extra work would be required to accommodate their work.
Sadly, the recording of the second interview, with the Big Lottery Fund, was lost when a phone crash in late 2018 wiped the device; only brief notes from the session survived.
The context horseshoe (drowning in tags)
System use and non-use
Duplication of use: participants already post photos to social media, so the standard and systems should be adapted to harvest this existing activity rather than duplicating it.
The existing use of Excel should have been embraced, rather than designing yet another novel system for participants to adopt; existing data standards demonstrate how publishing can be achieved via spreadsheets.
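One way to read this implication is that the system could accept spreadsheets workers already maintain and map their rows into structured records, much as spreadsheet-based data standards (360Giving, for example) allow publishing from ordinary spreadsheet templates. A minimal sketch, in which the column names and the semicolon-separated tag convention are illustrative assumptions rather than part of any deployed system or real standard:

```python
import csv
import io

# Hypothetical spreadsheet export -- the column headings here are
# invented for illustration; a real deployment would take them from
# whichever data standard the charity adopts.
SPREADSHEET = """\
Date,Activity,Amount,Tags
2017-07-03,Summer trip,120.50,youth;outdoors
2017-07-10,Craft session,35.00,youth;arts
"""

def rows_from_spreadsheet(text):
    """Map spreadsheet rows into plain records, splitting the tag column."""
    records = []
    for row in csv.DictReader(io.StringIO(text)):
        records.append({
            "date": row["Date"],
            "activity": row["Activity"],
            "amount": float(row["Amount"]),
            "tags": row["Tags"].split(";"),
        })
    return records

records = rows_from_spreadsheet(SPREADSHEET)
```

The point of the sketch is that the translation from an existing spreadsheet to publishable structured data is mechanical, so the burden of adopting a new tool need not fall on the workers.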
TBC (Focused around desires for other stakeholders)
Commitment and action modelling
Rather than only modelling discrete actions, it may be sensible to model the actual commitments that a charity makes.
Discrete actions may then be linked to multiple commitments to demonstrate how those commitments are being met.
Commitments may come from anywhere – the community, funders, etc. – and this provenance can be used to narrow analysis down, as currently the “weight” of interpretation falls on those trying to make sense of the data, or on system designers.
This rebalancing creates a little extra work for the charity, but systems can be designed to support workers in it.
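A rough sketch of what such a model could look like follows. The class names, fields, and example commitments are hypothetical, invented only to illustrate the many-to-many link between actions and commitments and the filtering by commitment source described above; they are not part of Rosemary Accounts.

```python
from dataclasses import dataclass, field

@dataclass(frozen=True)
class Commitment:
    """Something the charity has promised, and to whom the promise traces back."""
    description: str
    source: str  # e.g. "community" or "funder" -- illustrative sources

@dataclass
class Action:
    """A discrete action, linkable to multiple commitments."""
    description: str
    commitments: list = field(default_factory=list)

# Hypothetical commitments from two different sources.
outdoor_fund = Commitment("Run outdoor activities", source="funder")
youth_voice = Commitment("Respond to young people's requests", source="community")

# One action can demonstrate progress against several commitments at once.
trip = Action("Summer camping trip", commitments=[outdoor_fund, youth_voice])

def actions_for_source(actions, source):
    """Narrow analysis to actions tied to commitments from one source."""
    return [a for a in actions if any(c.source == source for c in a.commitments)]

funder_view = actions_for_source([trip], "funder")
```

Because each action carries its commitments, a funder or a community member can filter the record down to what is relevant to them, shifting some of the interpretive weight off the reader and onto the data model.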
Building from contexts, links, and boundary objects
Digital systems can support transparency processes by enabling a dialectical form of transparency.
Dialectics is the process by which a thesis and its antithesis produce a synthesis.
Systems can be configured to support this dialectic as an interaction between the stakeholders and the charity.
This can address the problems with the tags.
The goal is not to add every tag in the world, but to find the tags which are already there and which are important to you.
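This idea of surfacing the tags already in use, rather than imposing an exhaustive taxonomy, could be as simple as counting tag frequency across existing records. A minimal sketch, with invented records and tag names purely for illustration:

```python
from collections import Counter

# Hypothetical activity records -- tags are whatever workers already use.
records = [
    {"activity": "Summer trip", "tags": ["youth", "outdoors"]},
    {"activity": "Craft session", "tags": ["youth", "arts"]},
    {"activity": "Camping weekend", "tags": ["youth", "outdoors"]},
]

def important_tags(records, min_uses=2):
    """Surface the tags already present in the data, ranked by how often they recur."""
    counts = Counter(tag for r in records for tag in r["tags"])
    return [tag for tag, n in counts.most_common() if n >= min_uses]
```

Here a one-off tag like "arts" drops out, while the tags that genuinely describe the charity's recurring work rise to the top, keeping the vocabulary small and meaningful to the organisation.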