EdMedia 2014 keynote on learning analytics

At EdMedia 2014 [#edmediaconf] I enjoyed great AACE and Finnish hospitality 🙂

The critical stance of my keynote there seemed to resonate with delegates, who hear a lot about “Big Data” and analytics, but have reservations about the kinds of learning that such technologies may perpetuate. I sought to deconstruct analytics, to clarify how an approach and the way it is used embody an educational worldview. Knowing this, what kinds of learners are needed for 21st century society, and what role can analytics play in advancing this mission?

Part of this emerging picture is what we’re focusing on here at LearningEmergence.net — redefining metrics that value qualities in the learner that many are talking about, but which are hard to evidence.

Here’s the replay + slides [pdf/pptx].

Abstract: Education is about to experience a data tsunami from online trace data (VLEs; MOOCs; Quantified Self) integrated with conventional educational datasets. This requires new kinds of analytics to make sense of this new resource, which in turn asks us to reflect deeply on what kinds of learning we value. We can choose to know more than ever about learners and teachers, but like any modelling technology or accounting system, analytics do not passively describe sociotechnical reality: they begin to shape it. What realities do we want analytics to perpetuate, or bring into being? Can we talk about analytics in the same breath as the deepest values that a wholistic educational experience should nurture? Could analytics become an ally for those who want to shift assessment regimes towards valuing the qualities that many now regard as critical to thriving in the ‘age of complexity’?

Bio: Simon Buckingham Shum is Professor of Learning Informatics at the Open University’s Knowledge Media Institute, where he is also Associate Director (Technology), overseeing knowledge and technology transfer to the OU. He researches, teaches and consults on Learning Analytics, Collective Intelligence and Argument Visualization. He co-edited Visualizing Argumentation (Springer 2003) followed by Knowledge Cartography (2008, 2nd Edition 2014). He served as Program Co-Chair of the 2nd International Learning Analytics conference, chaired the LAK13 Discourse-Centric Learning Analytics workshop, and the LASI13 Dispositional Learning Analytics workshop. He is a co-founder of the Society for Learning Analytics Research, the Compendium Institute and Learning Emergence. In August 2014, he joins the University of Technology Sydney as director of the new Connected Intelligence Centre. WWW: simon.buckinghamshum.net

Learning Analytics + NICs for Systemic Educational Improvement


Personal reflections on two workshops and a lecture with Tony Bryk (Carnegie Foundation for the Advancement of Teaching), hosted last week by Ruth Deakin Crick at the University of Bristol. What follows, after a brief introduction to the concept of NICs, are my thoughts on the intersection of NICs with Learning Analytics. I made a number of connection points between the features of the DEED+NIC approach and learning analytics, which I’ll highlight in green.

[Image]

with Gulzhan Azimbayeva, Harvey Goldstein, Ruth Deakin Crick and Tony Bryk

[Image]
A slide from Tony Bryk’s presentation

Introduction: NICs

The ideas of the human-centred computing pioneer Douglas Engelbart (dougengelbart.org) run like DNA through my work; I find so much depth of insight in them [see his Afterword to my book]. Doug showed the world in the 1960s many of the features that we now take for granted in our personal computing: the mouse, windows, hyperlinks, videoconferencing, direct editing of text on screen.

However, his work on making computers more intuitive as personal tools for thought was just part of his bigger vision for improving what he called our Collective IQ — humanity’s capacity to tackle “the complex, urgent problems” we face by working more effectively together.

The concept of the Networked Improvement Community (NIC) came from this work:

[Image]

“A” represents how the organization or community goes about its core business or mission; “B” represents the process by which it improves its core business activity (through the efforts of individuals and improvement communities); an Improvement Alliance is a “C” activity; “C” is any activity that improves “B” activity. By definition, improvement communities operate at the B and C levels. Conversely, any time more than one person is involved in a B or C activity, it’s an improvement community. An important function of “C” is to network improvement communities within and across organizations, forming a C level improvement community, aka “C Community” or “Improvement Alliance” of representative stakeholders from a variety of B activities. Organizations can also join forces at the C level to create a more robust C function, forming a super Improvement Alliance.

Many people have explored and trialled this concept, experimenting with a range of technologies and ways of working that are designed to make evidence-based advances on complex problems. Educational examples of particular relevance include the Carnegie Foundation’s DEED methodology and Alpha Labs, University of Bristol’s dispositional analytics research programme, the Learning Emergence network and its Evidence Hub, and the many Collaboratories for distributed research communities.

Tony Bryk used this alternative figure to show how “B” improvement clusters seeking to improve frontline “A” activities can themselves network to create a level “C” NIC:

[Image]

Bryk and Gomez have documented the rationale behind their educational improvement science strategy in detail [pdf]. Their concept of an educational NIC cannot be applied to just any collective, but comes with some distinctive features, which I summarized as follows in the Edu-NICs workshop I ran:

[Image]

Scavenging from healthcare improvement science

In his workshop and public lecture, Tony Bryk described how he has ‘scavenged’ as much as he can from the healthcare profession’s adoption of improvement science, which about 25 years ago apparently stood where education stands today. It turns out that it has taken two and a half decades’ concerted effort by the US Institute for Healthcare Improvement (IHI) to establish a new professional discipline, working on translating innovation into scalable practice. Healthcare shares with education a very similar gulf between academic scientific research and its reward systems, and the translation of insights into scalable practice on the frontline.

Bryk pointed us to the work of Atul Gawande, who concluded his TED talk (18:10):

“Making systems work” is the great task of our generation for health, education, climate change, poverty…

This vignette from a UK Health Foundation movie shows how concepts such as practice-based learning, collective intelligence and evidence-informed practice are now becoming embedded, although, of course, nobody is declaring “mission accomplished”. Swap the words maternity care/hospitals/doctors/patients with education/universities/educators/learners — and it still all makes sense:

Bryk’s call to action is that within education, there is precious little of this systematic, systemic, intentional improvement methodology to be found. Education is still stuck where healthcare was, with a fixation on the scientific paradigm for truth, grounded in randomized controlled trials. Instead, a new methodological paradigm is needed whose core question is not simply What works? in an isolated context, but How do we replicate and scale what works? across contexts. This is not because we can hope for ‘one size fits all’ solutions — quite the opposite: it is because we understand how important certain contextual factors are to the embedding of that innovation:

[Image]

Analytics implication? By extension, the challenge of improving education applies to learning analytics (which are, after all, new kinds of tools for supporting different kinds of pedagogy and assessment). Learning analytics faces the same challenge of bridging the gulf between academic research and frontline practice, and generalizing findings. As success and failure stories in the field emerge, there is exactly the same need to try and understand the contributing contextual variables. A distinguishing feature may be that learning analytics contains the seeds for its own success in this regard, since computational and statistical approaches to identifying the most predictive variables from large datasets could be used to advance the field’s own Level C learning — not just the learning of the students being tracked at Levels A and B.

Implications for ICT

Moving towards thinking about opportunities for ICT to add value, I summarised a set of functional roles as follows:

[Image]

It is no coincidence that the above defines a socio-technical infrastructure not only for professionals seeking to advance their field, but also for scaffolding students in authentic, collaborative inquiry. Given the challenges we face at many societal scales, we need to train the next generation more effectively: to design inquiries; to make sense of complex, heterogeneous scientific and practitioner data, from multiple perspectives and epistemic traditions, via a diversity of human and computational tools; and to learn the skills of collaborative knowledge negotiation and community facilitation in the role of ‘hub’ catalysts.

I then stepped through this cycle, as detailed in these slides:

The remainder of this note focuses on the role of analytics.

Implications for Learning Analytics

Understanding the interplay between different levels in complex systems

In a special issue devoted to complexity science, social science and computation, colleagues documented the frontline challenges that need to be tackled in modelling complex social systems, among them, multilevel dynamics: how different levels, and systems of systems, influence each other. For computational social scientists seeking to simulate a social system formally in order to understand its structure and dynamics, this is a basic research frontier. We are not so ambitious as to want to simulate the social richness of schools or courses, but the challenge of understanding how the macro and micro shape each other is at the heart of the difficulty of educational reform, and the challenge of creating what Bryk calls “practical theories and methods” which are robust enough to make the journey from academia to the front line, negotiating all the constraints of politics and practice on the way.

The learning analytics community recognizes the different levels of data and analytics that are now in play within educational systems, but has no good accounts yet of how these influence each other. George Siemens and Phil Long introduced this diagram to distinguish learning analytics that attend to fine-grained patterns in learner behavior from academic analytics that focus on the more static demographics and periodic course outcomes of interest to strategic decision makers in institutions:

[Image]

In my own attempt to summarise the levels, I used micro/meso/macro terminology, and hinted at how the levels may start to inform each other:

[Image]

  • Macro-level analytics seek to enable cross-institutional analytics, for instance, through ‘maturity’ surveys of current institutional practices or improving state-wide data access to standardized assessment data over students’ lifetimes. Macro-analytics will become increasingly real-time, incorporating more data from the finer-granularity meso/micro levels, and could conceivably benefit from benchmarking and data integration methodologies developed in non-educational sectors (although see below for concerns about the dangers of decontextualized data and the educational paradigms they implicitly perpetuate).
  • Meso-level analytics operate at institutional level. To the extent that educational institutions share common business processes with sectors already benefitting from Business Intelligence (BI) methods and technologies, they can be seen as a new BI market sector, which can usefully appropriate tools to integrate data silos in enterprise warehouses, optimize workflows, generate dashboards, mine unstructured data, better predict ‘customer churn’ and future markets, and so forth. It is the BI imperative to optimise business processes that partly motivates efforts to build institutional-level “academic analytics”, and we see communities of practice specifically for BI within educational organisations, which have their own cultures and legacy technologies.
  • Micro-level analytics support the tracking and interpretation of process-level data for individual learners (and by extension, groups). This data is of primary interest to learners themselves, and those responsible for their success, since it can provide the finest level of detail, ideally as rapidly as possible. This data is correspondingly the most personal, since (depending on platforms) it can disclose online activity click-by-click, physical activity such as geolocation, library loans, purchases, and interpersonal data such as social networks. Researchers are adapting techniques from fields including serious gaming, automated marking, educational data mining, computer-supported collaborative learning, recommender systems, intelligent tutoring systems/adaptive hypermedia, information visualization, computational linguistics and argumentation, and social network analysis.
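To make the layering concrete, here is a minimal sketch of how the same micro-level trace data could be rolled up into meso- and macro-level views. The event records and field names are entirely hypothetical, chosen only to illustrate the aggregation idea:

```python
from collections import defaultdict

# Hypothetical micro-level trace events: (student, institution, course, action)
events = [
    ("s1", "uniA", "maths101", "forum_post"),
    ("s1", "uniA", "maths101", "video_view"),
    ("s2", "uniA", "maths101", "forum_post"),
    ("s3", "uniB", "maths101", "forum_post"),
]

# Micro level: per-learner activity counts, of primary interest to
# learners and those responsible for their success
micro = defaultdict(int)
for student, inst, course, action in events:
    micro[student] += 1

# Meso level: per-institution course activity (the institutional BI view)
meso = defaultdict(int)
for student, inst, course, action in events:
    meso[(inst, course)] += 1

# Macro level: cross-institutional totals for the same course,
# enabling comparison across contexts
macro = defaultdict(int)
for student, inst, course, action in events:
    macro[course] += 1

print(micro["s1"])                 # 2
print(meso[("uniA", "maths101")])  # 3
print(macro["maths101"])           # 4
```

The point of the sketch is that each level is a different aggregation of the same underlying trace data, which is what makes the mutual enrichment between levels, discussed next, possible.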

As the figure shows, what we now see taking place is the integration of, and mutual enrichment between, these layers. Company mergers and partnerships show business intelligence products and enterprise analytics capacity from the corporate world being integrated with course delivery and social learning platforms that track micro-level user activity. The aggregation of thousands of learners’ interaction histories across cohorts, temporal periods, institutions, regions and countries creates meso + macro level analytics with an unprecedented level of fine-grained process data (Scenario: comparing similar courses across institutions for the quality of online discourse in final year politics students). In turn, the creation of such large datasets begins to make possible the identification and validation of patterns that may be robust across the idiosyncrasies of specific contexts. In other words, the breadth and depth at the macro + meso levels add power to micro-analytics (Scenario: better predictive models and feedback to learners, because statistically, one may have greater confidence in the predictive power of key learner behaviours when they have been validated against a nationally aggregated dataset, than from an isolated institution).

Example: Bryk reported that their Statway developmental mathematics initiative can triple the success rate of current programmes, in half the time. However, the next step is not merely to promote its success, publish, hope others pick it up, and move on to the next thing. Bryk emphasised the need to look at the variation, and ask: why did one school fail dismally? What can we learn? It turned out that success was dependent on the presence of certain kinds of staff. In Improvement Science, “failure is a treasure”. That’s counter-cultural to most kinds of research, where one always hopes for success, and it requires a bigger frame of reference which values the understanding of contextual variables, and expects failure.

What I think we see with Bryk’s work on the DEED methodology is a mechanism by which we can build knowledge about how the micro/meso/macro layers of an educational system interact — the arrows in the figure. Since local context matters, micro-level results should be passed ‘up’ the levels in order to pool data, detect patterns, and interpret why things are breaking/working, in order to then make more effective interventions back ‘down’ in local contexts. The data explosion coming from the new kinds of micro-level learning analytics must be escalated and interrogated for higher order systemic learning, so that successful analytics interventions can be adapted and replicated for other contexts.

Seeing the system

Central to Bryk & Gomez’s conception of a NIC are shared representations, which help orient the collective to the nature and scope of the problem, candidate solutions, and criteria for success. Essentially, we’re talking about maps that help people know which piece of the jigsaw they are working on. As a collective builds common ground in language and terminology, they may be able to map the system in a way that serves as a common reference point (a boundary object in Leigh Star’s terms). One example would be:

[Image]

In a collective intelligence NIC platform designed to support the emergence of a community aligning themselves to such a map, we would then expect that these maps can serve as navigational aids around the knowledge space:

[Image]

This is scaffolded, for instance, in the Evidence Hub platform, e.g. click on this image to see how the Hub’s building blocks (Issues, Claims, Supporting and Challenging Evidence, People, Organizations, Projects) interconnect around a given central theme:

[Image]

or are being tackled by location of Project and Organization:

[Image]

As the NIC builds its knowledge, one wants to know the state of the debate, and open issues, e.g. What evidence-based claims can we make? In what context does this approach work? Who is working on this problem in a Muslim context? etc. A NIC platform should serve as an analytics hub, generating views from the aggregated data flowing to it from the many local experiments. Two examples are a Knowledge Tree and an Argument Map:

[Image]

[Image]

The DEED methodology introduces educational leaders to some of the most common problem structuring representations in business analysis, such as Fishbone (Ishikawa) Diagrams and Driver Diagrams.

In this method, the Fishbone is used to map how the team is defining the system to be improved, e.g.

[Image]

Systems thinkers and engineers do of course bring a well-tested armoury of representational schemes and support tools to the task of evolving a picture of the system in a participatory way. Bryk has simply found the ones shown here to be simple and effective when working with educators, but I doubt he would exclude the relevance of other schemes.

Mapping the drivers

Focal areas of such a system picture are then selected for intervention, based on the best available knowledge of what drives a desired Aim:

[Image]

For instance, there is an Alpha Lab NIC targeting Productive Persistence in student mathematics, using the following driver diagram:


This is itself a distillation of a significant, complex research literature (identifying many variables from many survey tools) into a “Practical Theory” that practitioners can work with. Expanded slightly, it looks like this, showing candidate interventions to be tried, and the sources of evidence underpinning them:

[Image]

Zooming in on the right hand Change Ideas column, we see candidate interventions:

[Image]

Within the Open University, we are developing a similar approach to justifying why we think an intervention will pay off, and how it will be tracked. In the figure below, a given row in the matrix represents a student experience intervention, and the columns specify a range of metadata, including: data sources required, time windows for expected impact, who was responsible, and the behavioural measures which would be tracked in order to evidence impact (or lack thereof). One would want the Rationale and Outcome cells in the matrix to have some backing stronger than a hunch. They could link out to a living document of some sort where we build our collective understanding of what works, what doesn’t, and why we believe this.

[Image]

The Hub could take many forms, from an internal spreadsheet/wiki, to a Driver Diagram, possibly organized in a purpose-designed knowledge-building platform like the Evidence Hub, or its descendant the Impact Map.

So the progress we are making here is to encourage the representation of the working theory about why certain interventions (Change Ideas) may have an impact on the desired learning behaviour. Once Change Ideas are coupled with one or more learning analytics, one has created a rapid feedback loop. This is essentially a methodology and design rationale for the selection and orchestration of analytics, based on the strongest practitioner and scientific evidence available at the time to that team: this is their local collective intelligence, incomplete or possibly even wrong to start with, but refined by being passed to higher levels of sensemaking in the NIC, perhaps borrowing from and adapting other teams’ theories: a broader, deeper form of collective intelligence.
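As a sketch of how such a working theory might be represented computationally, a Driver Diagram whose Change Ideas are each coupled to a named analytic might look as follows. The class and field names are my own illustrative assumptions, not Bryk's or the OU's actual schema:

```python
from dataclasses import dataclass, field

@dataclass
class ChangeIdea:
    description: str   # the candidate intervention
    rationale: str     # backing: practitioner or scientific evidence
    analytic: str      # behavioural measure tracked to evidence impact

@dataclass
class Driver:
    name: str
    change_ideas: list = field(default_factory=list)

@dataclass
class DriverDiagram:
    aim: str
    drivers: list = field(default_factory=list)

    def analytics(self):
        """All analytics wired to the diagram's change ideas: coupling
        each Change Idea to a measure is what closes the feedback loop."""
        return [ci.analytic for d in self.drivers for ci in d.change_ideas]

# Build a toy diagram (content here is purely hypothetical)
diagram = DriverDiagram(aim="Productive persistence in mathematics")
driver = Driver(name="Students feel they belong")
driver.change_ideas.append(ChangeIdea(
    description="Structured peer study groups",
    rationale="Practitioner evidence from a pilot cohort (hypothetical)",
    analytic="weekly_forum_participation",
))
diagram.drivers.append(driver)

print(diagram.analytics())  # ['weekly_forum_participation']
```

The design point is that every Change Idea carries both its rationale and its measure, so the diagram doubles as a design rationale for which analytics to orchestrate, and can be passed up the NIC for refinement.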

Analytics-powered Driver Diagrams: Perimeta System Models

We have been having an extraordinarily fruitful collaboration with colleagues in the University of Bristol Systems Centre, who recognise the pivotal role that a disposition to learn has in the design of solutions to multi-stakeholder, wicked, socio-technical problems. They have developed a methodology, algorithm and support tool called Perimeta for modeling complex systems, in explicit recognition that uncertainty is inherent in decision-making. It is therefore vital (1) to see both supporting and challenging evidence of progress, and (2) to know what one doesn’t know.

As detailed in a Learning Emergence technical report, in which the approach was piloted in a schools context [pdf], of the many systems thinking approaches available, one of the most appropriate for supporting collaborative development and leadership decision-making in complex systems such as learning communities is hierarchical process modeling (HPM), which has three important characteristics:

  • Enhancing the visual, effective reporting of complex ideas and information, using hierarchical mapping of processes and an ‘Italian Flag’ model of evidence;
  • Assimilating all forms of evidence – data, prediction and opinion; and
  • Facilitating access to key information required for informed discussion, innovation and agreement.

Perimeta supports collaborative development of solutions to complex problems by providing a highly visual interface for understanding complex cause-and-effect and complex evidence. Perimeta can be described as:

  • a learning analytic designed to model diverse and complex processes
  • driven by stakeholder purpose
  • capable of dealing with hard, soft and narrative data in evidence of success, failure and ‘what we don’t know’
  • a visual environment for sense-making
  • a framework for self-evaluation and dialogue

The key point to make is that this hierarchical process model is essentially a Driver Diagram in terms of Bryk’s work, a working theory of what factors contribute to desired outcomes:

[Image]

The difference is that this Driver Diagram is ‘executable’, since HPM provides a way to aggregate different kinds of evidence being gathered at the ‘leaves’ of the branches, resulting in a kind of analytics ‘dashboard’:

[Image]

Recognising the uncertainty inherent in most data, the Perimeta model adopts an ‘Italian Flag’ visual to represent the quality of all of the evidence, consisting of:

  • ‘Green’ representing the strength of positive evidence
  • ‘Red’ representing the strength of negative evidence
  • ‘White’ representing lack of evidence, or uncertainty (the ‘white space’ awaiting exploration)

The evidence can be sourced from many places, but must be mapped into a weighting table. For instance, to map from responses to a Likert scale survey tool, HPM uses the following:

[Image]
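The flavour of this mapping and aggregation can be sketched as follows. The Likert-to-flag proportions and the simple averaging rule below are illustrative assumptions of mine, not Perimeta's published weighting table or algorithm:

```python
# Hypothetical 'Italian Flag' sketch: each 5-point Likert response maps to
# (green, white, red) proportions: positive evidence, uncertainty, negative
# evidence. A parent process node then combines its children's flags.

LIKERT_TO_FLAG = {
    1: (0.0, 0.2, 0.8),  # strongly disagree: mostly negative evidence
    2: (0.2, 0.3, 0.5),
    3: (0.3, 0.4, 0.3),  # neutral: a large 'white' uncertainty band
    4: (0.6, 0.3, 0.1),
    5: (0.9, 0.1, 0.0),  # strongly agree: mostly positive evidence
}

def aggregate(flags):
    """Combine child flags into a parent flag by simple averaging
    (a placeholder for whatever weighting scheme the model actually uses)."""
    n = len(flags)
    return tuple(round(sum(f[i] for f in flags) / n, 3) for i in range(3))

# Three survey responses feeding one leaf process of the hierarchy
responses = [5, 4, 3]
flags = [LIKERT_TO_FLAG[r] for r in responses]
green, white, red = aggregate(flags)
print(green, white, red)  # 0.6 0.267 0.133
```

Note how the 'white' band survives aggregation: unlike a single averaged score, the flag keeps "what we don't know" visible alongside the positive and negative evidence.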

So to conclude an extremely fruitful collision of research programmes, two points:

We can envisage combining Driver Diagrams sourced from the literature (cf. the Productive Persistence figure), with DDs sourced from staff practitioner knowledge about local conditions, in order to design analytics which contribute to a system-wide Perimeta model, which is used to monitor the health of the system as a whole.

Hierarchical process models such as the above provide a way to create a more wholistic set of analytics: a way to quantify the wider range of educational outcomes that institutions value, adding a systems-level view to the many kinds of micro-level analytics now being developed.

The agenda to develop a wholistic conception of the learner and citizen, and analytics fit for such a purpose, is now building momentum as a wider network of people connect with each other. I’d recommend the ongoing series of Reinvent the University for the Whole Person video roundtables as a great way to tune in…

[Image]


Learning Dispositions + Authentic Inquiry in a Primary School

What happens when you turn a curriculum topic over to 10-11 year old children,  give them freedom to choose their focus, and increasing autonomy to make their own decisions to design, create and run a showcase event? Indeed, how do staff cope with stepping back like this? If Ofsted inspectors were to walk in, how could the school evidence learning? How can you evidence the development of lifelong learning dispositions, and how does this relate to the school’s strategic concerns about the progress of different pupil groups on traditional attainment measures? What roles do social learning tools like reflective blogging have to play?

This movie provides a brief glimpse into a two year series of pilots at Bushfield School, documented in more detail in this report. It represents the convergence of both University of Bristol and Open University research and development into learning analytics that can evidence processes associated with deeper learning, especially dispositional analytics (learn more: replay talk / workshop).

(See Reports for the entire library of school case studies.)

Small, T., Shafi, A. and Huang, S. (2014) Learning Power and Authentic Inquiry in the English Primary Curriculum: A Case Study, Report No. 12, ViTaL Development & Research Programme, University of Bristol. [pdf]

This report documents progress in a two-year action-research programme at Bushfield School, Milton Keynes, with two main purposes: firstly, to build on the School’s success in developing children’s capacity to learn; secondly, to track and measure the impact of its interventions for this purpose. The school combined the Effective Lifelong Learning Inventory (ELLI) with the Authentic Inquiry learning methodology from University of Bristol. Qualitative and quantitative data are combined to examine the impact of the pilots from the perspective of staff and pupils, comparing learning power against a range of demographic and attainment datasets, in the distinctive context of a primary school already experienced in the Building Learning Power approach.

DEED+ELLI+AI+CI = Systemic School Learning

Introduction

We’re excited to report the unfolding story about how we are using the groundbreaking work of Tony Bryk‘s team on Design, Educational Engineering and Development (DEED) as a methodology for systemic school change. This is combined with the University of Bristol’s Effective Lifelong Learning Inventory (ELLI), which forms part of a process of Authentic Inquiry (AI) for students, teachers and leaders. The insights from these prototypes are then shared via a novel website from the Open University’s Knowledge Media Institute for harnessing Collective Intelligence (CI), called the Evidence Hub. This is an exciting convergence since, as you are about to see, the school piloting this reports that it has catalysed a profound shift in how they think about professional development.

On 15th July 2013 the Centre for Systems Learning and Leadership at the University of Bristol held a seminar with teachers and leaders from Oasis Academy John Williams, who had identified student engagement in learning as a complex problem which they wanted to get to grips with and improve. They formed a Networked Improvement Community with colleagues from the University (NICs are a powerful concept developed by Doug Engelbart, whose work has since been applied by Tony Bryk – see below).

They told an exciting story about an experiment to engage a cohort of middle and senior teachers in their own accredited professional enquiries into student engagement through rapid prototyping: test fast – fail fast and early – learn and improve. Each of the seven teachers gained 20 credits from the MSc in Systems Learning and Leadership through their enquiry, via collaborative seminars held in school and at the University.

Researcher’s Viewpoint

Ruth Deakin Crick introduced the project with an overview of the key ideas:

  • Learners are themselves a ‘complex system’
  • Deep learning (by students; teachers; leaders; organisations) is a journey from purpose to performance, which can be scaffolded by an authentic enquiry methodology
  • Aligning professional learning to organisational purpose at several levels in schools as complex living systems provides a rich architecture for improvement.

These ideas have been drawn from the Carnegie Foundation for the Advancement of Teaching, particularly the paper Getting Ideas into Action: Building Networked Improvement Communities in Education by Tony Bryk and colleagues. This is combined with University of Bristol’s own research into Learning Power (as quantified by ELLI) and the pedagogy and methodology of Authentic Enquiry.

Leader’s viewpoint

Rebecca Clark (Executive Principal & Regional Academies Director) provided the context for the research in terms of the historical, social, and economic factors which shape the community’s expectations in education, and the continuing journey of change which the school is on. She shared insights gained from her own study and the project, and how these may be applied and developed further in the wider strategic setting of the family of Oasis Academies.


Simon Buckingham Shum, Professor at the Open University and visiting fellow at Bristol, described the way in which the project is harvesting the learning from these seven enquiries and making it available not only to colleagues in their own school, but globally, through the Evidence Hub for Systems Learning and Leadership. This is a site you can explore, and we encourage you to sign up to subscribe to alerts, and begin sharing your own insights. [Evidence Hub research paper]


Phil (Assistant Principal) and Richard are two teachers at the Academy who developed their own authentic enquiries. They are at different stages in their careers and teach different subjects. They explained how the project has changed them as teachers, as well as their practice, and has had an impact on the engagement of their students.

Phil’s Story

Phil talked about his enquiry into how to engage students in a top Year 8 Maths set by ‘handing over responsibility to students’ and helping them to develop their strategic awareness.

Simon then explored Phil’s story distilled on the Evidence Hub [view on Hub]:

Richard’s Story

Richard’s enquiry focused on the development of resilience with his top set Science class – challenging them by setting problems which were unsolvable, in order to understand that confusion and failure are all part of learning.

Simon then demo’d Richard’s story distilled on the Evidence Hub [view on Hub]:

As Phil and Richard’s stories are now tied to the Evidence Hub entry for Oasis Academies, when you view the Oasis homepage, you see their work:


This is just the beginning of the story for this Networked Improvement Community in Bristol, which will be continuing during the next academic year with the Centre for Systems Learning and Leadership.

We warmly invite you to add comments below, or join the reflective conversation on the Evidence Hub if you have issues to raise, or evidence-based claims/solutions to share.

Dispositional, Complex Systems Analytics @ LASI


We’ve just got back from an intensely busy, creative and enjoyable week at LASI13, Stanford University, where Ruth Deakin Crick, Chris Goldspink, Rebecca Ferguson, Nelson Gonzalez and I (online!) contributed to the programme. Ruth and Rebecca were both blogging, so you can get their in-situ reflections, and I’m sure there’ll be more to follow as we unpack the many ideas.

Following a successful Dispositional Learning Analytics workshop, on which we’ve had very positive feedback, awareness should grow of Learner Dispositions (or Mindset, as Dweck calls it), Teacher Dispositions, the potential of a multi-level Complex Systems approach to educational system design, and, from Nelson’s work at Declara, the emerging potential of applying machine learning techniques to such datasets.


Ruth and Rebecca also participated in a panel on analytics for “21st century skills”, in which they introduced the wider audience to the learning power framework, a systems approach, and a particular implementation of these ideas as visual analytics within EnquiryBlogger. (Just in: we have a fresh set of data from a very exciting deployment of these ideas in a primary school over the last month, which we look forward to reporting on!)

All the morning sessions from Stanford are, or will soon be, replayable, and Simon coordinated a global network of LASI-Local institutes, and the online participants, whose blogs are automatically gathered in the LASI-Aggregator, and provide a rich account of a momentous week.

Immediate thoughts on next steps were pooled from Stanford and global online participants in a Google Doc, to give you a feel for where people felt they’d got to by the end of Friday. Ruth documented and reported back on her breakout group’s discussion.

Watch the Society for Learning Analytics Research for news of where next, and get your head going for how you might contribute to LAK14 conference next Spring…