Ruth and I have the privilege of working with Randy Bass [blog] and team at Georgetown University. Randy is a leading thinker on the deep purpose of higher education, and on how this entails rethinking the qualities we cultivate in students, and the analytics that might evidence them.
Our calling as a university is the formation of men and women (but many institutions do this of course). However, we do so in the context of a community of enquiry and knowledge creation (fewer institutions do this). Moreover, we do so for the public, common good (fewer still have this explicit mission). These three are interlocked and inseparable.
The railroad companies who thought they were in the business of railroads went bust. The ones who thrived understood they were in the transportation business.
What’s our equivalent?
Let’s call it Formation. Or Transformation. Or Integration.
But if we think we’re in the business of Content, Skills or Information Transfer, then by 2030, we’re going to have a LOT of competition.
…or, as we might say, Dead In The Water.
His Formation by Design (FxD) initiative is defining the contours of this new landscape, and their progress report is an inspiring read (disclosure: it includes material from our contributions to a symposium last June). Or check out the video roundtable discussion series he hosted called Reinvent University for the Whole Person. He was also on the team of (what I think is) the largest national ePortfolio initiative in higher education, a reflection of the importance being placed on reflection for transformational learning.
Randy and team: all power to you as we figure out together how we redefine our calling, to help students find theirs. Along the way, let’s reinvent the environments and metrics that will constitute the new evidence base in 2030 🙂
The critical stance of my keynote there seemed to resonate with delegates, who hear a lot about “Big Data” and analytics, but have reservations about the kinds of learning that such technologies may perpetuate. I sought to deconstruct analytics, to clarify the ways in which the choice of an approach, and how it is used, embody an educational worldview. Knowing this, what kinds of learners are needed for 21st century society, and what role can analytics play in advancing this mission?
Part of this emerging picture is what we’re focusing on here at LearningEmergence.net — redefining metrics that value qualities in the learner that many are talking about, but which are hard to evidence.
Abstract: Education is about to experience a data tsunami from online trace data (VLEs; MOOCs; Quantified Self) integrated with conventional educational datasets. This requires new kinds of analytics to make sense of this new resource, which in turn asks us to reflect deeply on what kinds of learning we value. We can choose to know more than ever about learners and teachers, but like any modelling technology or accounting system, analytics do not passively describe sociotechnical reality: they begin to shape it. What realities do we want analytics to perpetuate, or bring into being? Can we talk about analytics in the same breath as the deepest values that a wholistic educational experience should nurture? Could analytics become an ally for those who want to shift assessment regimes towards valuing the qualities that many now regard as critical to thriving in the ‘age of complexity’?
Personal reflections on two workshops and a lecture with Tony Bryk (Carnegie Foundation for the Advancement of Teaching), hosted last week by Ruth Deakin Crick at the University of Bristol. What follows, after a brief introduction to the concept of NICs, are my thoughts on the intersection of NICs with Learning Analytics. I made a number of connection points between the features of the DEED+NIC approach and learning analytics, which I’ll highlight in green.
The ideas of the human-centred computing pioneer Douglas Engelbart (dougengelbart.org) run like DNA through my work; I find so much depth of insight in them [see his Afterword to my book]. Doug showed the world in the 1960s many of the features that we now take for granted in our personal computing: the mouse, windows, hyperlinks, videoconferencing, direct editing of text on screen.
However, his work on making computers more intuitive as personal tools for thought was just part of his bigger vision for improving what he called our Collective IQ — humanity’s capacity to tackle “the complex, urgent problems” we face by working more effectively together.
“A” represents how the organization or community goes about its core business or mission; “B” represents the process by which it improves its core business activity (through the efforts of individuals and improvement communities); “C” is any activity that improves “B” activity, so an Improvement Alliance is a “C” activity. By definition, improvement communities operate at the B and C levels. Conversely, any time more than one person is involved in a B or C activity, it’s an improvement community. An important function of “C” is to network improvement communities within and across organizations, forming a C-level improvement community, aka a “C Community” or “Improvement Alliance” of representative stakeholders from a variety of B activities. Organizations can also join forces at the C level to create a more robust C function, forming a super Improvement Alliance.
Many people have explored and trialled this concept, experimenting with a range of technologies and ways of working that are designed to make evidence-based advances on complex problems. Educational examples of particular relevance include the Carnegie Foundation’s DEED methodology and Alpha Labs, the University of Bristol’s dispositional analytics research programme, the Learning Emergence network and its Evidence Hub, and the many Collaboratories for distributed research communities.
Tony Bryk used this alternative figure to show how “B” improvement clusters seeking to improve frontline “A” activities can themselves network to create a level “C” NIC:
Bryk and Gomez have documented the rationale behind their educational improvement science strategy in detail [pdf]. Their concept of an educational NIC cannot be applied to just any collective, but comes with some distinctive features, which I summarized as follows in the Edu-NICs workshop I ran:
Scavenging from healthcare improvement science
In his workshop and public lecture, Tony Bryk described how he has ‘scavenged’ as much as he can from the healthcare profession’s adoption of improvement science, which, about 25 years ago, was apparently where education stands today. It turns out that it has taken two and a half decades’ concerted effort by the US Institute for Healthcare Improvement (IHI) to establish a new professional discipline, working on translating innovation into scaleable practice. Healthcare shares with education a very similar gulf between academic scientific research and its reward systems, and the translation of insights into scaleable practice on the frontline.
Bryk pointed us to the work of Atul Gawande, who concluded his TED talk (18:10):
“Making systems work” is the great task of our generation for health, education, climate change, poverty…
This vignette from a UK Health Foundation movie shows how concepts such as practice-based learning, collective intelligence and evidence-informed practice are now becoming embedded, although, of course, nobody is declaring “mission accomplished”. Swap the words maternity care/hospitals/doctors/patients with education/universities/educators/learners — and it still all makes sense:
Bryk’s call to action is that within education, there is precious little of this systematic, systemic, intentional improvement methodology to be found. Education is still stuck where healthcare was, with a fixation on the scientific paradigm for truth, grounded in randomized controlled trials. Instead, a new methodological paradigm is needed whose core question is not simply What works? in an isolated context, but How do we replicate and scale what works? across contexts. This is not because we can hope for ‘one size fits all’ solutions — quite the opposite — it is precisely because we understand how important certain contextual factors are to the embedding of that innovation:
Analytics implication? By extension, the challenge of improving education applies to learning analytics (which are, after all, new kinds of tools for supporting different kinds of pedagogy and assessment). Learning analytics faces the same challenge of bridging the gulf between academic research and frontline practice, and generalizing findings. As success and failure stories in the field emerge, there is exactly the same need to try to understand the contributing contextual variables. A distinguishing feature, in this regard, may be that learning analytics contains the seeds of its own success, since computational and statistical approaches to identifying the most predictive variables from large datasets could be used to advance the field’s own Level C learning — not just the learning of the students being tracked at Levels A and B.
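To make that concrete, here is a minimal sketch (the dataset, variable names and numbers are entirely hypothetical) of how a NIC might pool deployment records from many sites and ask a standard model which contextual variables best predict whether an analytics intervention succeeded:

```python
# A minimal sketch (hypothetical data and variable names): pooling records
# of analytics deployments across sites, then asking a standard model which
# contextual variables carry the most signal about success.
import pandas as pd
from sklearn.ensemble import RandomForestClassifier

# One row per deployment site: contextual variables + outcome
deployments = pd.DataFrame({
    "staff_training_hours":  [2, 10, 0, 8, 12, 1, 6, 9],
    "cohort_size":           [30, 120, 45, 200, 80, 60, 150, 95],
    "tutor_dashboard_use":   [0.1, 0.8, 0.2, 0.7, 0.9, 0.1, 0.6, 0.8],
    "prior_attainment_mean": [52, 61, 48, 58, 64, 50, 57, 60],
    "succeeded":             [0, 1, 0, 1, 1, 0, 1, 1],
})

X = deployments.drop(columns="succeeded")
y = deployments["succeeded"]

model = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)

# Rank the contextual variables by their importance to the prediction
for name, importance in sorted(
        zip(X.columns, model.feature_importances_),
        key=lambda pair: -pair[1]):
    print(f"{name:24s} {importance:.2f}")
```

The point is not the particular model, but that the field’s own Level C learning about context could itself be data-driven.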
Implications for ICT
Moving towards thinking about opportunities for ICT to add value, I summarised a set of functional roles as follows:
It is no coincidence that the above defines a socio-technical infrastructure not only for professionals seeking to advance their field, but also for scaffolding students in authentic, collaborative inquiry. Given the challenges we face, at many societal scales, we need to train the next generation more effectively: to design inquiries; to make sense of complex, heterogeneous scientific and practitioner data from multiple perspectives and epistemic traditions, via a diversity of human and computational tools; and to learn the skills of collaborative knowledge negotiation and community facilitation in the role of ‘hub’ catalysts.
I then stepped through this cycle, as detailed in these slides:
The remainder of this note focuses on the role of analytics.
Implications for Learning Analytics
Understanding the interplay between different levels in complex systems
In a special issue devoted to complexity science, social science and computation, colleagues documented the frontline challenges that need to be tackled in modelling complex social systems, among them, multilevel dynamics: how different levels, and systems of systems, influence each other. For computational social scientists seeking to simulate a social system formally in order to understand its structure and dynamics, this is a basic research frontier. We are not so ambitious as to want to simulate the social richness of schools or courses, but the challenge of understanding how the macro and micro shape each other is at the heart of the difficulty of educational reform, and the challenge of creating what Bryk calls “practical theories and methods” which are robust enough to make the journey from academia to the front line, negotiating all the constraints of politics and practice on the way.
The learning analytics community recognizes the different levels of data and analytics that are now in play within educational systems, but has no good accounts yet of how these influence each other. George Siemens and Phil Long introduced this diagram to distinguish learning analytics that attend to fine-grained patterns in learner behavior from academic analytics that focus on the more static demographics and periodic course outcomes of interest to strategic decision makers in institutions:
In my own attempt to summarise the levels, I used micro/meso/macro terminology, and hinted at how the levels may start to inform each other:
Macro-level analytics seek to enable cross-institutional analytics, for instance, through ‘maturity’ surveys of current institutional practices or improving state-wide data access to standardized assessment data over students’ lifetimes. Macro-analytics will become increasingly real-time, incorporating more data from the finer-granularity meso/micro levels, and could conceivably benefit from benchmarking and data integration methodologies developed in non-educational sectors (although see below for concerns about the dangers of decontextualized data and the educational paradigms they implicitly perpetuate).
Meso-level analytics operate at the institutional level. To the extent that educational institutions share common business processes with sectors already benefitting from Business Intelligence (BI) methods and technologies, they can be seen as a new BI market sector, which can usefully appropriate tools to integrate data silos in enterprise warehouses, optimize workflows, generate dashboards, mine unstructured data, better predict ‘customer churn’ and future markets, and so forth. It is the BI imperative to optimise business processes that partly motivates efforts to build institutional-level “academic analytics”, and we see communities of practice specifically for BI within educational organisations, which have their own cultures and legacy technologies.
Micro-level analytics support the tracking and interpretation of process-level data for individual learners (and by extension, groups). This data is of primary interest to learners themselves, and those responsible for their success, since it can provide the finest level of detail, ideally as rapidly as possible. This data is correspondingly the most personal, since (depending on platforms) it can disclose online activity click-by-click, physical activity such as geolocation, library loans, purchases, and interpersonal data such as social networks. Researchers are adapting techniques from fields including serious gaming, automated marking, educational data mining, computer-supported collaborative learning, recommender systems, intelligent tutoring systems/adaptive hypermedia, information visualization, computational linguistics and argumentation, and social network analysis.
As the figure shows, what we now see taking place is the integration of, and mutual enrichment between, these layers. Company mergers and partnerships show business intelligence products and enterprise analytics capacity from the corporate world being integrated with course delivery and social learning platforms that track micro-level user activity. The aggregation of thousands of learners’ interaction histories across cohorts, temporal periods, institutions, regions and countries creates meso + macro level analytics with an unprecedented level of fine-grained process data (Scenario: comparing similar courses across institutions for the quality of online discourse in final year politics students). In turn, the creation of such large datasets begins to make possible the identification and validation of patterns that may be robust across the idiosyncrasies of specific contexts. In other words, the breadth and depth at the macro + meso levels add power to micro-analytics (Scenario: better predictive models and feedback to learners, because statistically, one may have greater confidence in the predictive power of key learner behaviours when they have been validated against a nationally aggregated dataset, than from an isolated institution).
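As a toy illustration of that last scenario (all data here is synthetic), consider estimating the predictive power of a single micro-level behaviour from one institution’s data versus a pooled, multi-institution dataset; pooling shrinks the standard error on the estimated effect, which is exactly the added statistical confidence described above:

```python
# Toy illustration (synthetic data): the same predictor of student success
# estimated from one institution vs. a pooled national dataset. Pooling
# shrinks the standard error, i.e. greater confidence in the predictive
# power of the behaviour.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)

def simulate_institution(n):
    forum_posts = rng.poisson(5, n)                  # micro-level behaviour
    logit = -1.0 + 0.3 * forum_posts                 # true underlying effect
    passed = rng.random(n) < 1 / (1 + np.exp(-logit))
    return forum_posts, passed.astype(int)

def fit(forum_posts, passed):
    X = sm.add_constant(forum_posts.astype(float))
    return sm.Logit(passed, X).fit(disp=False)

# Micro data from a single institution...
local = fit(*simulate_institution(200))
# ...vs. the same behaviour pooled across 50 institutions.
xs, ys = zip(*(simulate_institution(200) for _ in range(50)))
pooled = fit(np.concatenate(xs), np.concatenate(ys))

print("local  coef, se:", local.params[1], local.bse[1])
print("pooled coef, se:", pooled.params[1], pooled.bse[1])
```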
Example: Bryk reported that their Statway developmental mathematics initiative can triple the success rate of current programmes, in half the time. However, the next step is not merely to promote its success, publish, hope others pick it up, and move on to the next thing. Bryk emphasised the need to look at the variation, and ask: why did one school fail dismally? What can we learn? It turned out that success was dependent on the presence of certain kinds of staff. In Improvement Science, “failure is a treasure”. That’s counter-cultural to most kinds of research, where one always hopes for success, and it requires a bigger frame of reference which values the understanding of contextual variables, and expects failure.
What I think we see with Bryk’s work on the DEED methodology is a mechanism by which we can build knowledge about how the micro/meso/macro layers of an educational system interact — the arrows in the figure. Since local context matters, micro-level results should be passed ‘up’ the levels in order to pool data, detect patterns, and interpret why things are breaking/working, in order to then make more effective interventions back ‘down’ in local contexts. The data explosion coming from the new kinds of micro-level learning analytics must be escalated and interrogated for higher order systemic learning, so that successful analytics interventions can be adapted and replicated for other contexts.
Seeing the system
Central to Bryk & Gomez’s conception of a NIC are shared representations, which help orient the collective to the nature and scope of the problem, candidate solutions, and criteria for success. Essentially, we’re talking about maps that help people know which piece of the jigsaw they are working on. As a collective builds common ground in language and terminology, they may be able to map the system in a way that serves as a common reference point (a boundary object in Leigh Star’s terms). One example would be:
In a collective intelligence NIC platform designed to support the emergence of a community aligning themselves to such a map, we would then expect that these maps can serve as navigational aids around the knowledge space:
This is scaffolded, for instance, in the Evidence Hub platform, e.g. click on this image to see how the Hub’s building blocks (Issues, Claims, Supporting and Challenging Evidence, People, Organizations, Projects) interconnect around a given central theme:
As the NIC builds its knowledge, one wants to know the state of the debate, and open issues, e.g. What evidence-based claims can we make? In what context does this approach work? Who is working on this problem in a Muslim context? etc. A NIC platform should serve as an analytics hub, generating views from the aggregated data flowing to it from the many local experiments. Two examples are a Knowledge Tree and an Argument Map:
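Here is a minimal sketch (simplified, with invented field names) of how such building blocks might be represented as a data model, and of the kind of “state of the debate” query a NIC platform could then answer over the aggregated contributions:

```python
# A minimal sketch (simplified, invented field names) of Evidence Hub-style
# building blocks, and a 'state of the debate' query over them.
from dataclasses import dataclass, field

@dataclass
class Evidence:
    summary: str
    supports: bool            # True = supporting, False = challenging
    context: str              # the setting the evidence comes from

@dataclass
class Claim:
    statement: str
    evidence: list = field(default_factory=list)

@dataclass
class Issue:
    question: str
    claims: list = field(default_factory=list)

hub = Issue(
    question="Does dashboard feedback improve persistence?",
    claims=[Claim(
        statement="Weekly dashboards raise completion rates",
        evidence=[
            Evidence("Trial at College A: +8% completion", True, "community college"),
            Evidence("No effect in distance-learning pilot", False, "online"),
        ],
    )],
)

# State of the debate: which claims are contested, and in which contexts?
for claim in hub.claims:
    pro = [e for e in claim.evidence if e.supports]
    con = [e for e in claim.evidence if not e.supports]
    print(f"{claim.statement}: {len(pro)} supporting / {len(con)} challenging")
    for e in con:
        print(f"  challenged in context: {e.context}")
```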
The DEED methodology introduces educational leaders to some of the most common problem structuring representations in business analysis, such as Fishbone (Ishikawa) Diagrams and Driver Diagrams.
In this method, the Fishbone is used to map how the team is defining the system to be improved, e.g.
Systems thinkers and engineers do of course bring a well-tested armoury of representational schemes and support tools to the task of evolving a picture of the system in a participatory way. Bryk has simply found the ones shown here to be simple and effective when working with educators, but I doubt he would exclude the relevance of other schemes.
Mapping the drivers
Focal areas of such a system picture are then selected for intervention, based on the best available knowledge of what drives a desired Aim:
For instance, there is an Alpha Lab NIC targeting Productive Persistence in student mathematics, using the following driver diagram:
This is itself a distillation of a significant, complex research literature (identifying many variables from many survey tools) into a “Practical Theory” that practitioners can work with. Expanded slightly, it looks like this, showing candidate interventions to be tried, and the sources of evidence underpinning them:
Zooming in on the right hand Change Ideas column, we see candidate interventions:
Within the Open University, we are developing a similar approach to justifying why we think an intervention will pay off, and how it will be tracked. In the figure below, a given row in the matrix represents a student experience intervention, and the columns specify a range of metadata, including: the data sources required, time windows for expected impact, who is responsible, and the behavioural measures to be tracked in order to evidence impact (or lack thereof). One would want the Rationale and Outcome cells in the matrix to have some backing stronger than a hunch: they could link out to a living document of some sort where we build our collective understanding of what works, what doesn’t, and why we believe this.
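As a sketch of what one row of such a matrix might look like as a structured record (the field names and the example intervention are illustrative, not the OU’s actual schema):

```python
# A sketch of one row of such an intervention matrix (field names and
# example values invented for illustration).
from dataclasses import dataclass

@dataclass
class Intervention:
    name: str
    rationale: str               # why we believe this will work; ideally a
                                 # link out to the living document, not a hunch
    data_sources: list           # datasets needed to track it
    impact_window_weeks: int     # when we expect to see movement
    owner: str                   # who is responsible
    tracked_measures: list       # behavioural measures evidencing impact

row = Intervention(
    name="Week-3 tutor phone call to inactive students",
    rationale="hub.example/claims/early-contact-reduces-dropout",  # hypothetical link
    data_sources=["VLE logins", "assignment submissions"],
    impact_window_weeks=4,
    owner="Faculty retention lead",
    tracked_measures=["login frequency", "first assignment submission rate"],
)
```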
The Hub could take many forms, from an internal spreadsheet/wiki, to a Driver Diagram, possibly organized in a purpose designed knowledge-building platform like the Evidence Hub, or its descendant the Impact Map.
So the progress we are making here is to encourage the representation of the working theory about why certain interventions (Change Ideas) may have an impact on the desired learning behaviour. Once Change Ideas are coupled with one or more learning analytics, one has created a rapid feedback loop. This is essentially a methodology and design rationale for the selection and orchestration of analytics, based on the strongest practitioner and scientific evidence available to that team at the time: this is their local collective intelligence, incomplete or possibly even wrong to start with, but refined by being passed to higher levels of sensemaking in the NIC, perhaps borrowing from and adapting other teams’ theories: a broader, deeper form of collective intelligence.
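A minimal sketch of such an “instrumented” working theory (labels and metric names illustrative, loosely echoing the Productive Persistence example rather than reproducing it): a driver diagram whose Change Idea leaves are each coupled to the learning analytic that will evidence whether they move their driver:

```python
# A minimal sketch of an 'instrumented' driver diagram (labels and metric
# names illustrative): each Change Idea leaf is coupled to the learning
# analytic that will evidence whether it moves its driver.
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class Node:
    label: str
    analytic: Optional[str] = None    # metric tracked for this change idea
    children: list = field(default_factory=list)

diagram = Node("Aim: students persist in developmental maths", children=[
    Node("Driver: students feel socially tied to peers and faculty", children=[
        Node("Change idea: structured first-week group work",
             analytic="forum participation rate, weeks 1-3"),
    ]),
    Node("Driver: students believe they can learn maths", children=[
        Node("Change idea: growth-mindset writing exercise",
             analytic="resubmission rate after a failed quiz"),
    ]),
])

def feedback_loop(node, depth=0):
    """Walk the working theory, listing each change idea and its analytic."""
    tag = f"  [tracked by: {node.analytic}]" if node.analytic else ""
    print("  " * depth + node.label + tag)
    for child in node.children:
        feedback_loop(child, depth + 1)

feedback_loop(diagram)
```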
Analytics-powered Driver Diagrams: Perimeta System Models
We have been having an extraordinarily fruitful collaboration with colleagues in the University of Bristol Systems Centre, who recognise the pivotal role that a disposition to learn has in the design of solutions to multi-stakeholder, wicked, socio-technical problems. They have developed a methodology, algorithm and support tool called Perimeta for modeling complex systems, in the explicit recognition that uncertainty is inherent in decision-making. However, it is vital (1) to see both supporting and challenging evidence of progress, and (2) to know what one doesn’t know.
As detailed in a Learning Emergence technical report, in which the approach was piloted in a schools context [pdf], of the many systems thinking approaches available, one of the most appropriate for supporting collaborative development and leadership decision-making in complex systems such as learning communities is hierarchical process modeling, which has three important characteristics:
Enhancing the visual, effective reporting of complex ideas and information, using hierarchical mapping of processes and an ‘Italian Flag’ model of evidence;
Assimilating all forms of evidence – data, prediction and opinion; and
Facilitating access to key information required for informed discussion, innovation and agreement.
Perimeta supports collaborative development of solutions to complex problems by providing a highly visual interface for understanding complex cause-and-effect and complex evidence. Perimeta can be described as:
a learning analytic designed to model diverse and complex processes
driven by stakeholder purpose
capable of dealing with hard, soft and narrative data in evidence of success, failure and ‘what we don’t know’
a visual environment for sense-making
a framework for self-evaluation and dialogue
The key point to make is that this hierarchical process model is essentially a Driver Diagram in Bryk’s terms, a working theory of what factors contribute to desired outcomes:
The difference is that this Driver Diagram is ‘executable’, since HPM provides a way to aggregate different kinds of evidence being gathered at the ‘leaves’ of the branches, resulting in a kind of analytics ‘dashboard’:
Recognising the uncertainty inherent in most data, the Perimeta model adopts an ‘Italian Flag’ visual to represent the quality of all of the evidence, consisting of:
‘Green’ representing the strength of positive evidence
‘Red’ representing the strength of negative evidence
‘White’ representing lack of evidence, or uncertainty (the ‘white space’ awaiting exploration)
The evidence can be sourced from many places, but must be mapped into a weighting table. For instance, to map from responses to a Likert scale survey tool, HPM uses the following:
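To illustrate the mechanics (this is my simplification for exposition, not Perimeta’s published algorithm, and the Likert weightings below are invented rather than taken from the actual table): map each Likert response to a (green, white, red) triple, then roll the flags up the hierarchy by weighted averaging:

```python
# An illustrative simplification (not Perimeta's published algorithm):
# map Likert responses to Italian Flag evidence triples, then roll the
# flags up a hierarchical process model by weighted averaging.
from dataclasses import dataclass, field
from typing import Optional

# A hypothetical mapping from a 5-point Likert response to
# (green, white, red) = (positive, uncertain, negative), summing to 1.
LIKERT_TO_FLAG = {
    "strongly agree":    (0.9, 0.1, 0.0),
    "agree":             (0.6, 0.3, 0.1),
    "neutral":           (0.2, 0.6, 0.2),
    "disagree":          (0.1, 0.3, 0.6),
    "strongly disagree": (0.0, 0.1, 0.9),
}

@dataclass
class Process:
    name: str
    flag: Optional[tuple] = None      # set on leaves from evidence
    children: list = field(default_factory=list)
    weight: float = 1.0               # relative importance to the parent

def roll_up(p: Process) -> tuple:
    """Weighted average of the children's flags; leaves keep their own."""
    if not p.children:
        return p.flag
    flags = [(c.weight, roll_up(c)) for c in p.children]
    total = sum(w for w, _ in flags)
    p.flag = tuple(sum(w * f[i] for w, f in flags) / total for i in range(3))
    return p.flag

root = Process("Learners develop as resilient enquirers", children=[
    Process("Staff model enquiry", flag=LIKERT_TO_FLAG["agree"], weight=2),
    Process("Learners get formative feedback", flag=LIKERT_TO_FLAG["neutral"]),
])

green, white, red = roll_up(root)
print(f"green={green:.2f} white={white:.2f} red={red:.2f}")
```

The white band makes the ‘what we don’t know’ explicit at every level, which is precisely what a conventional red/green dashboard hides.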
So to conclude an extremely fruitful collision of research programmes, two points:
We can envisage combining Driver Diagrams sourced from the literature (cf. the Productive Persistence figure), with DDs sourced from staff practitioner knowledge about local conditions, in order to design analytics which contribute to a system-wide Perimeta model, which is used to monitor the health of the system as a whole.
Hierarchical process models such as the above provide a way to create a more wholistic set of analytics: a way to quantify the wider range of educational outcomes that institutions value, adding a systems-level view to the many kinds of micro-level analytics now being developed.
The agenda to develop a wholistic conception of the learner and citizen, and analytics fit for such a purpose, is now building momentum as a wider network of people connect with each other. I’d recommend the ongoing series of Reinvent the University for the Whole Person video roundtables as a great way to tune in…
How might we, from scratch, design digital platforms to model multiple data streams from multiple sources in a generalized ecosystem of learning to make predictions about learning based on changes to instruction? We envision MOORs as digital terrains traversed by learners across formal and informal education (e.g., schooling, museums, the internet), and across the lifespan.
Slides from my intro talk, which connected Ruth and Chris’s research at Bristol University and Incept Labs, with emerging concepts of the future learner’s personal data cloud, in which I manage the release of my behavioural and somatic data to boost my learning analytics…