some thoughts on notschool research methodology

There are three key issues to be borne in mind when implementing and evolving a methodology for notschool and for the notschools that will follow, building on our progress:


What can / can't we do?

The notschool students are all individuals: some were keen to be in school but unable to be there; others were irreparably damaged by their experiences of school and delighted not to be there.

Clearly there is no common base of experience, only the common consequence that they are not in school and have not been there for some time. In this context many simple research strategies are not open to us: direct interview, questionnaire completion and standardised tests could all revive echoes of past difficulties and damage the entire project's contribution. Similarly, there are parents who will be delighted and willing to comment, but also parents who are unwittingly or consciously at the root of the student's difficulties, and any of the standard interview techniques might trigger a hostility which does further damage. The problem, of course, is that we are not clear, indeed cannot be clear, which student and which parent is in which category. This means that whilst we have good ethnographic evidence (from critical event logs, from letters and notes, from recollections of direct contact, from actual activity on-line...) we can never apply any uniform metric without placing progress at risk, and clearly that would be unacceptable. What can emerge, though, is a consensus from the aggregation of this ethnographic evidence. An external evaluator will, by and large, be evaluating the methodology rather than collecting data.

It is also clear that when learning has gone as spectacularly wrong as it has for these students, with all their capabilities and strengths, many people have a vested interest in a particular interpretation of that failure. Threading a way around the misinformation (the lack of esteem for the student, the apportioning of blame and indeed the many personal agendas) can only be achieved by those who are daily engaged in the project, and to that extent the project personnel are key witnesses for the research.

In short, we need to protect the students above all else, and this will inevitably affect the quality of the research evidence; but conversely it will safeguard their progress and hopefully offer clearer evidence of what does and does not work. It will always be better to have patchy evidence of clear progress than robust quantification of failure.


Where are we?

All these students differ; indeed their diversity is part of the reason that school has not fitted them well. There is no common baseline in aptitude, engagement, fragility, creativity... anything. It is clear that some have retained an enthusiasm for one key area of learning (fishing, art, arithmetic) whilst others have a broad interest in learning but no "major"; again, diversity is the key characteristic. If we might generalise at this point, the students are all more able as learners than the school system had predicted, and falling out of school was a loss of opportunity both for the students and for the schools; but this is a generalisation. In this context there is no generic baseline to measure project success against.

Harder still for research design, we cannot establish that baseline for each student individually either. In the two diagrams below, a student at A, in serious decline, might be viewed as a success if two terms later they were still at A and the decline had been arrested; but for a student at B, already moving forward, we might be disappointed if they remained at B for even one term. We don't, of course, have the luxury of retrospective analysis and are unable to plot the lines illustrated, so we cannot apply a vector of change to any baseline for each student. The exercise is pointless.

What we can do, however, is be confident about expectations.


Establishing a methodology

Our methodology is properly summarised in the project research paper. But it is worth reminding ourselves of some indicative practical axioms here:
  • This is an iterative project, so the methodology needs to be iterative as well; the probability is of continuous change;
    Key task: we must make sure that research tools, and the researchers, stay flexible
  • Although we are looking to evidence success in learning, the assessment and examination systems may well have contributed to the breakdown of the relationship between institutional learning and our notschool students. We are exploring some of this with exam boards, but it means that our evidence of success in learning will be eclectic;
    Key task: we must make sure that we maintain a dialogue with assessment bodies to help them progress; search diligently for the broadest portfolio of certification opportunities
  • Although the Think.com environment records all contributions, and we are committed to active learning in the lab, we are also aware that "lurking" in learning (logging on, watching and following discussions without contributing) represents real progress where a student has completely turned their back on engagement in learning in the past;
    Key task: we need to evolve better tools for capturing and analysing these passive learning commitments, and be clear that a lack of contribution is not necessarily a lack of engagement (a sketch of the kind of analysis we have in mind follows this list).
  • The Internet is vast and has plenty of high quality content. Much of our curriculum work includes pointers to these "external" resources. We need to be clear that independent learning using the web's resources is as exciting an outcome as seeing our students go to a library to pursue their own interests;
    Key task: we need to explore better logging of learning engagement with "external" resources; the sketch after this list counts link follows alongside other activity.
  • More to come....
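
To make the last two key tasks a little more concrete, here is a minimal sketch of the kind of log analysis we have in mind. It is an illustration under assumptions, not a description of Think.com: the environment offers no such export, and the event kinds ("post", "view", "follow_link") and field layout below are invented purely for illustration. The one idea it carries is that a student who only reads, or only follows links out to "external" resources, still registers as engaged rather than as absent.

    # A minimal sketch of engagement analysis over hypothetical activity logs.
    # Assumption: Think.com offers no such export; the event kinds and field
    # layout here are invented purely for illustration.
    from collections import Counter
    from dataclasses import dataclass

    @dataclass
    class Event:
        student: str  # anonymised identifier, never a real name
        kind: str     # "post" = contribution, "view" = reading, "follow_link" = external resource
        target: str   # discussion title or external URL

    def engagement_profile(events):
        """Tally every kind of activity per student, so that a student who
        only reads ("lurks") still shows up as engaged, not as absent."""
        profiles = {}
        for e in events:
            profiles.setdefault(e.student, Counter())[e.kind] += 1
        return profiles

    if __name__ == "__main__":
        log = [
            Event("S01", "view", "fishing discussion"),
            Event("S01", "view", "fishing discussion"),
            Event("S01", "follow_link", "http://example.org/knots"),
            Event("S02", "post", "art gallery"),
        ]
        for student, counts in engagement_profile(log).items():
            # S01 never posts, yet two views and a link follow are
            # evidence of engagement, not of its absence.
            print(student, dict(counts))

Even a tally this crude would let us report "lurking" and external reading as positive evidence, rather than quietly discarding every session that ends without a contribution.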