The simple research question "how did you recognise success in your website?" led to an unexpected level of obfuscation and even prevarication (this is clearly flagged as an area needing further, and urgent, investigation). Below are some of the possible "flags" for success, together with the confusions that surrounded each. This section concludes with some suggestions for judging the "success" of a GridClub:
A predetermined rate of return is exceeded: for commercial sites this should be straightforward - advertising or other revenues pass a predetermined "success" gateway (a minimal sketch of such a gateway follows this list). In practice even this simple commercial model was clouded by time scale, by notional future revenue potential imputed from raw log-ons, and by a general commercial view that not being "there" was so catastrophic that almost any cost was justified to "be in the game". For non-commercial sites the confusion was greater still: was a level of learning productivity sought, or an enhanced national aggregate future income stream, or a cost-efficient model of professional development? It was never clear, and most projects examined were short term in duration - "pilots" of one sort or another.
A number of visitors per unit time is exceeded: few properly understood even simple distinctions such as "visitors" rather than "hits" (a single visit to a page would normally register many hits, for example; a sketch of the distinction follows this list). Open-registration sites always assumed all registrations were new rather than repeats made after details were forgotten or lost. Outside of our own experience in the 'lab, no one appeared to have attempted to quantify the number of return visits, or to map "activity" to a pattern of "individual visits". This was an oft-quoted, poorly understood measure of success.
Learning productivity: productivity defined as either the same learning for less investment, or more learning for the same investment (the arithmetic is sketched after this list). We found no Internet-based learning projects that had adopted this measure of "success", although it seems a highly appropriate one. One reason may be that the learning gains described are often new learning gains and therefore hard to quantify.
Change is induced, and managed: some projects intended that this would be an outcome, but measuring and attributing change on a constantly moving "conveyor belt" of technology is problematic. For example, claiming the increased use of computers by teachers as a measure of success, at a time when the general population is increasing personal use for various social, economic and pragmatic reasons, is insupportable. The issue is one of attribution, not quantification.
Experience is gained: anecdotally this was the most commonly quoted parameter of success ("we learned a lot about something we understood imperfectly"), but it was rarely the declared measure, where a declared measure existed at all.
An undesirable outcome was averted: this measure saw the opportunity cost of inaction as high - perhaps the alternative to a project's activity was children becoming unduly exposed to advertising elsewhere, or the loss of a market to a commercial rival.
Project-specific outcomes defined by some initial bidding process: where funding was clearly specified and sourced for a particular purpose (examples included Health Education or Financial Awareness for primary-age children), the measure of success was defined and sought through evaluation - usually by a combination of questionnaire and interview - but rarely derived directly from website activity.
Pour encourager les autres! ("to encourage the others"): showing how on-line learning projects should be done, in a way that starts from educational and quality outcomes. A good, high-profile project can certainly lead to real change across the whole industry, but the exemplar must be very good and demonstrably so - it would also need to evidence some of the "successes" listed above, for example.
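To make the first of these flags concrete: below is a minimal sketch, in Python, of a "rate of return" success gateway, and of how notional future revenue imputed from raw log-ons can cloud it. All figures, names and the threshold are invented for illustration; no project we examined published numbers of this kind.

    # Minimal sketch of a "rate of return" success gateway.
    # All names and figures are illustrative assumptions, not data
    # from any of the projects examined.

    def rate_of_return(revenue: float, cost: float) -> float:
        """Simple return on investment: (revenue - cost) / cost."""
        return (revenue - cost) / cost

    SUCCESS_THRESHOLD = 0.15  # an assumed, predetermined 15% "success" gateway

    # The straightforward commercial test: realised revenue only.
    actual = rate_of_return(revenue=110_000, cost=100_000)
    print(f"actual: {actual:.0%} -> {'pass' if actual >= SUCCESS_THRESHOLD else 'fail'}")

    # The clouding effect described above: folding in notional future
    # revenue imputed from raw log-ons (50,000 log-ons at an assumed
    # 2.00 per log-on) turns the same project into a "success".
    notional = rate_of_return(revenue=110_000 + 50_000 * 2.0, cost=100_000)
    print(f"with notional revenue: {notional:.0%} -> {'pass' if notional >= SUCCESS_THRESHOLD else 'fail'}")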
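The visitor/hit confusion in the second flag is easy to illustrate. The following minimal sketch sessionises a simplified access log into hits, visits and visitors; the log format, the identifiers and the 30-minute session gap are all simplifying assumptions made for illustration, not a description of any project's actual logging.

    from datetime import datetime, timedelta

    # Minimal sketch: distinguishing "hits" from "visits" (sessions) and
    # counting return visitors. The log format and the 30-minute session
    # gap are simplifying assumptions for illustration only.

    SESSION_GAP = timedelta(minutes=30)

    # Each tuple is one logged request, i.e. one "hit": (visitor_id, timestamp).
    log = [
        ("anna", datetime(2001, 5, 1, 9, 0)),
        ("anna", datetime(2001, 5, 1, 9, 1)),   # same visit: one page pulls many files
        ("anna", datetime(2001, 5, 1, 9, 2)),
        ("ben",  datetime(2001, 5, 1, 10, 0)),
        ("anna", datetime(2001, 5, 2, 16, 0)),  # a return visit the next day
    ]

    hits = len(log)

    # Group hits by visitor, then split each visitor's hits into visits
    # wherever the gap between consecutive hits exceeds SESSION_GAP.
    by_visitor: dict[str, list[datetime]] = {}
    for visitor, stamp in sorted(log, key=lambda entry: entry[1]):
        by_visitor.setdefault(visitor, []).append(stamp)

    visits_per_visitor = {
        visitor: 1 + sum(1 for a, b in zip(stamps, stamps[1:]) if b - a > SESSION_GAP)
        for visitor, stamps in by_visitor.items()
    }

    visits = sum(visits_per_visitor.values())
    visitors = len(visits_per_visitor)
    returning = sum(1 for n in visits_per_visitor.values() if n > 1)

    print(f"{hits} hits, {visits} visits, {visitors} visitors, {returning} returning")
    # -> 5 hits, 3 visits, 2 visitors, 1 returning

Five hits collapse to three visits by two visitors, only one of whom returned - four different numbers, any one of which could be (and was) quoted as "success".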
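And the third flag, learning productivity, reduces to simple arithmetic once a learning gain can be quantified at all - which is, of course, the hard part. A minimal sketch, with invented figures:

    # Minimal sketch of the "learning productivity" measure: the same
    # learning for less investment, or more learning for the same
    # investment. All figures are invented; quantifying the learning
    # gain itself is the genuinely hard problem noted above.

    def productivity(learning_gain: float, investment: float) -> float:
        return learning_gain / investment

    baseline = productivity(learning_gain=100.0, investment=50_000.0)

    # "Success" case 1: the same learning for less investment.
    print(productivity(100.0, 40_000.0) > baseline)   # True

    # "Success" case 2: more learning for the same investment.
    print(productivity(120.0, 50_000.0) > baseline)   # True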
A final word of caution about short-termism and parameters of success: it is clear that a successful project requires several iterative steps and a long pilot - a year would be typical. However, it is not possible to pilot an ambitious website with a severely reduced subset of functionality. The project needs to start with a few children, but with a fully functioning (and changing) website. Inevitably this leads to a project running initially for a substantial period of time with a limited number of children (the pilot children) but with an expensive server and mediation infrastructure, all of which looks very expensive indeed measured against some parameters of success. However, there is no alternative if success at the full-scale roll-out stage is to be guaranteed.
In the context of GridClub there are a number of possible parameters of success; selecting from these will help determine the future shape and activity of the GridClub. For example: if the parameters of success are derived from long-term Treasury targets to develop a high-value, high-income-stream future workforce, then GridClub activity would, at least in part, be determined by some prognostication of the skill set of that future workforce (collaboration, annotation, information management, fast-response flexibility?). If, however, the number of site visits (or repeat visits) were the chosen parameter, this would steer the GridClub in a wholly different direction - seduction rather than induction, so to speak. In every case success needs to be measured over as long a term as is possible within the limits of the political system.
It may simply be that the last measure (pour encourager les autres) would be enough. So little of the new education standards agenda seems to have made it through into Internet activity that any success in this area would be enormously welcome. Beware, however, that this is probably the most expensive objective to achieve, because it would need to be done so well, and so transparently.