learning with insight
I agree entirely – this was actually the point of my expansion of SCRL into a browser. This would allow SCRL to examine and interact with all sorts of behaviours instead of just the sensor tool data. It seemed like a useful first pass at the concept, with future versions having SCRL as a standalone server-side program and a set of plugins for browsers, IDEs, and other student utilities. Does that make more sense in this context?
Well, this notion came out of my discussion with the Harvard researcher in Boston. Even if we embed regulation activities along with studying activities, they are still seen as two separate events. We wondered about the possibility of a “truly embedded nature of regulation” – so ‘studying’ implies ‘studying with regulation’; there is no other option given to students; a kind of ‘strict regulatory method of studying’. Thoughts on this?
I’m going to put together a diagram to show what my intentions are with the new SCRL, it might be helpful. The merging process I’ve been thinking about would allow our self-regulation system to monitor regulatory behaviour as merged with competency behaviour; it’s up to you to decide whether that would be properly embedded or not, but I think it’s a good first step. I’m entirely on board with the end goal, though. Regulation behaviours are an intrinsic part of motivation and learning, not just an optional process.
As promised, I drew up a little diagram for my proposal to assess learning behaviours independent of domain-specific sensors. This would make SCRL a sensor in and of itself – a sensor which is sensitive to learning behaviours with a regulatory viewpoint. I apologize for the messiness and poor production quality – if needed I’ll make something a bit more professional. As it is, this is just some scribbles, but they should demonstrate the flow of events. Diagram is at the bottom, more detailed text description follows.
Starting with the learner interacting with a SCRL-enabled system, two data streams are created as the learner works. Sensor plugins generate sensor data, which are fed to competency assessment systems in the same manner as in SCRL 1.0. Additionally, however, user actions are recorded on a micro-level – eye tracking, mouse tracking, user inputs, etc., are captured and wrapped as incomplete learning events, which are sent to the SCRL server for temporary storage.
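To make the "incomplete learning event" idea concrete, here is a minimal sketch of what such a wrapper might look like – the field names and shapes are my own guesses, not anything fixed in SCRL:

```python
from dataclasses import dataclass, field
from typing import Optional
import time

@dataclass
class IncompleteLearningEvent:
    """A micro-level user action awaiting a competency outcome.

    All field names here are illustrative assumptions."""
    source: str                      # e.g. "mouse", "eye", "keyboard"
    payload: dict                    # raw capture data from the plugin
    timestamp: float = field(default_factory=time.time)
    outcome: Optional[dict] = None   # filled in later by unification

    @property
    def complete(self) -> bool:
        # An event is "complete" once a competency outcome is attached.
        return self.outcome is not None
```

The event sits in temporary storage on the server as `complete == False` until unification (described below) supplies an outcome.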
At the server, SCRL preprocesses these events. It decides whether some events are extraneous or should not be kept (e.g. if sensitive information, or information that jeopardizes anonymity, is received); it also organizes the incomplete learning events into clusters according to time and importance. Mouse events are gathered into vectors or splines, partitioned by click events, pauses, and scrolls, for example.
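The mouse-event partitioning step might look roughly like this – a sketch assuming a time-ordered stream of `(timestamp, kind)` tuples and an invented pause threshold:

```python
def partition_mouse_events(events, pause_threshold=0.5):
    """Split a time-ordered stream of (timestamp, kind) mouse events
    into strokes, cutting at click events and at pauses longer than
    pause_threshold seconds. Clicks end a stroke and are not kept
    inside it; the event shape and the threshold are assumptions."""
    strokes, current, prev_t = [], [], None
    for t, kind in events:
        pause = prev_t is not None and t - prev_t > pause_threshold
        if current and (kind == "click" or pause):
            strokes.append(current)   # close the current stroke
            current = []
        if kind != "click":
            current.append((t, kind))
        prev_t = t
    if current:
        strokes.append(current)
    return strokes
```

Each resulting stroke could then be fitted to a vector or spline for the clustering step.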
These incomplete events can be analyzed in their present state by comparison with learning behaviour ontologies of two types. First, they can be compared with production rule systems built around expert knowledge of behaviour and motivation; for example, such a system could notice erratic eye movement that frequently goes off-screen and infer low motivation or a high level of distraction. This would include predicting which learning strategies have been chosen, and the predicted effectiveness of those strategies.
Second, these incomplete events can be compared with previously gained information about behaviours specific to this learner. For instance, the system may have learned that this learner doesn’t learn particularly well when studying on a weekday morning. These user-specific rules are generated in the processes that follow.
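As a toy example of the first (expert production rule) kind of check, a rule flagging likely distraction from off-screen gaze could be as simple as this – the viewport size and threshold are invented placeholders:

```python
def infer_distraction(gaze_samples, width=1920, height=1080, threshold=0.3):
    """Toy production rule: if more than `threshold` of the recent
    (x, y) gaze samples fall outside the viewport, infer likely
    distraction or low motivation. All numbers are placeholder
    guesses, not calibrated values."""
    off_screen = sum(1 for x, y in gaze_samples
                     if not (0 <= x < width and 0 <= y < height))
    return off_screen / len(gaze_samples) > threshold
```

A real rule base would combine many such conditions, and the user-specific rules (the second type) would override or refine these expert defaults per learner.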
Once competency information has been received from competency assessment tools (SCALE, Mi-Writer, etc.), it is unified with the waiting incomplete learning events. This unification can be done in a number of ways – I suggest that each learning event be given a weight according to the estimated impact of that event, with weights determined by expert opinion and occasionally assessed by BBN when there is a tight link (short time difference) between event generation and competency assessment. Studies should also supplement this, in my opinion.
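One simple way to realize the weighting scheme above is proportional credit assignment – a sketch in which the event dicts, the `weights` table (event kind to expert weight), and the credit formula are all illustrative assumptions, not a settled design:

```python
def unify(incomplete_events, competency_score, weights):
    """Attach a competency outcome to the waiting incomplete events,
    crediting each event in proportion to its expert-assigned weight.
    Unknown event kinds default to a weight of 1.0."""
    total = sum(weights.get(e["kind"], 1.0) for e in incomplete_events)
    completed = []
    for e in incomplete_events:
        share = weights.get(e["kind"], 1.0) / total
        completed.append({**e, "outcome": competency_score,
                          "credit": competency_score * share})
    return completed
```

The per-event `credit` values sum back to the competency score, so no assessed competency is lost or double-counted across events.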
This unification results in a completed learning event – we have a context, an action, and an outcome. There are a number of analytics we can do to these sets of learning events:
– we can examine the effectiveness of chosen strategies and compare this assessment to the predicted assessment in preprocessing to help build an individualized set of learning strategies suited to the learner.
– we can assess and monitor relative levels of attentiveness to the various learning tasks and help the learner come up with better plans for learning.
– we can do direct comparison of strategies at the individual, classroom, and learning domain level.
– lots more – these are just off the top of my head.
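The first of these analytics – comparing observed strategy effectiveness against the prediction made during preprocessing – could start as simply as averaging outcomes per strategy. A sketch, assuming completed events are dicts with `strategy` and `outcome` keys:

```python
from collections import defaultdict

def strategy_effectiveness(completed_events):
    """Average the observed outcome per learning strategy, for
    comparison with the effectiveness predicted at preprocessing
    time. The event shape is an assumption."""
    sums, counts = defaultdict(float), defaultdict(int)
    for e in completed_events:
        sums[e["strategy"]] += e["outcome"]
        counts[e["strategy"]] += 1
    return {s: sums[s] / counts[s] for s in sums}
```

Running this per learner versus per classroom versus per domain gives the three comparison levels mentioned above.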
This system will create a lot of data, ideal for Big Data analysis. I imagine higher-speed production systems doing just-in-time analysis for feedback and reporting, while slower and more complex analysis ticks over in the background.
Monitoring self-regulation happens as follows:
– We don’t have a clear view of initial task perception. This generally happens before the user encounters SCRL, as it is when the learner is deciding on actions to take, not taking actions. We can infer that it happens, however, and may be able to make guesses as to task perception based on the learner’s first actions. For instance, if the learner starts their study session by looking at Wikipedia articles on the general topics they’re interested in, this might indicate low familiarity with the topic, and that their task perception includes this lack of awareness.
– Planning can be monitored by browsing and reading behaviours, as we can interpret planning as the collection of information before beginning to enact a plan. This may intermingle with enacting, though, and is discussed below.
– Enacting can be monitored through application-specific websites and tools, such as Mi-Writer, Eclipse, etc., in the case of assignments. Some enacting may be reading in the form of study, however. Distinguishing enacting from planning in these cases will be difficult. I suggest that they can be distinguished by the nature of specific learner behaviour – slow, comprehensive reading is study, whereas quick, brief reading may be interpreted as either planning or execution. If the only reading encountered is brief, we can’t distinguish them, but if there are stretches in which the learner reads comprehensively, we may interpret this as the execution of a “reading” strategy. I welcome criticism on this point, though – there are certainly problems with this!
– Adaptation of plans can be seen in a shift of activity that is not followed shortly thereafter by the learner leaving the system. If the user stops their current work, does something different briefly, and then either resumes or starts a new activity in the same learning domain, we can assume that they have adapted their plans and are now somewhere else in the regulation process – assessing whether they have moved to the task perception, planning, or execution stage will take further work.
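The last two heuristics – telling comprehensive reading apart from skimming, and spotting plan adaptation in the activity log – could be prototyped as simple rules. Everything here (the 200 wpm cutoff, the interruption cap, the log shape) is an invented assumption, offered only as a starting point:

```python
def classify_reading(duration_s, words, slow_wpm=200):
    """Crude rule for the reading ambiguity above: slow, comprehensive
    reading is treated as enacting a 'reading' study strategy; fast
    skimming stays ambiguous between planning and execution."""
    wpm = words / (duration_s / 60.0)
    return "enacting" if wpm < slow_wpm else "ambiguous"

def detect_adaptation(activity_log, gap_limit=300):
    """Flag points where the learner briefly switches activity and
    then returns to the same learning domain without ending the
    session. activity_log is a time-ordered list of
    (start_time_s, domain) entries; gap_limit caps how long a
    'brief' interruption may last, in seconds."""
    adaptations = []
    for i in range(2, len(activity_log)):
        t0, d0 = activity_log[i - 2]   # original activity
        t1, d1 = activity_log[i - 1]   # brief interruption
        t2, d2 = activity_log[i]       # return to the same domain
        if d1 != d0 and d2 == d0 and (t2 - t1) <= gap_limit:
            adaptations.append(t2)
    return adaptations
```

Both rules would misfire in plenty of real cases – they are meant to show where measurable signals could attach to the regulation stages, not to settle the criticism invited above.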
That’s what I’ve got! Diagram is at the bottom. Thanks, and your opinion is welcome!
Excellent insights, Colin. I wish Prof Phil Winne himself could comment on future directions we could pursue.
Technology aside, let us look at the theory itself. Can we uniquely observe values of variables associated with self-regulation across multiple study activities of students, and model them causally? Some of the variables are:
Openness to change
Delaying learning gratification
Perceived learning task value
Comfort in use of technology
Sensory input control
Physical location control
Shutting out competing stimuli
Performing with a longer-term outlook
Multiple task rotation
Deliberate practice – towards expertise in a specific sequence of skills
SRL/CRL – Interrelated web-of-skills
External locus of control
Emotional self- and co-regulation
Amount of student thinking
Depth of student thinking
Conscious focus on learning
Self-estimation of performance
Confidence in self-estimation of performance
Internal locus of control
Pursuit of improvement