New viewpoint for regulation and reflection

29 Jan

SCRL tends to keep regulation-related activities ‘away’ from the studying-related activities: SCRL and Moodle, or SCRL and SCALE, involve different interfaces, different types of interactions, and so on. We should think of ways in which SCRL can be ‘embedded’ in, or become ‘seamless’ with, the studying environment itself. In Moodle, for the coding domain, a student may read chapters of an ebook, use the embedded IDE (VPL) to solve short problems, discuss materials with friends and tutors, take quizzes, and so on – all of these are studying activities. In NetBeans+Codex, a student may solve problems.

Since SCRL and MI-Dash have independent representations, students have to consciously move to a ‘separate’ environment, away from the studying environments (Moodle or NetBeans+Codex), to regulate using SCRL or reflect using MI-Dash. While it is important to maintain the independent representations/interfaces for MI-Dash and SCRL, it is also important to investigate possibilities of embedding/integrating SCRL and MI-Dash within the studying environment, in a fashion that is highly contextual (e.g., SCRL identifies studying strategies specific to the studying task the student is currently working on in a particular Moodle page) and seamless (e.g., MI-Dash focuses only on competencies/confidence values related to the particular set of skills/strategies associated with that studying task).
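As a rough illustration of what such contextual embedding might look like, here is a minimal sketch in Python. All of the names, activity types, and the strategy mapping below are invented for the example; they are not part of the actual SCRL or MI-Dash interfaces.

    # Hypothetical sketch of contextual embedding: given the studying context a
    # student is currently in (e.g. a Moodle page), return only the SCRL
    # strategies and MI-Dash competencies relevant to that context.
    from dataclasses import dataclass

    @dataclass
    class StudyContext:
        environment: str   # e.g. "moodle", "netbeans"
        activity: str      # e.g. "ebook-chapter", "vpl-exercise", "quiz"
        topic: str         # e.g. "recursion"

    # Illustrative mapping from activity type to applicable strategies.
    STRATEGIES_BY_ACTIVITY = {
        "ebook-chapter": ["preview headings", "summarise section", "self-question"],
        "vpl-exercise":  ["plan before coding", "test incrementally", "seek help"],
        "quiz":          ["review errors", "estimate confidence per item"],
    }

    def embedded_scrl_panel(ctx: StudyContext) -> dict:
        """Return the contextual SCRL/MI-Dash content for the current page."""
        strategies = STRATEGIES_BY_ACTIVITY.get(ctx.activity, [])
        return {
            "scrl_strategies": strategies,
            # MI-Dash would show only competencies tied to these strategies.
            "midash_competencies": [f"{ctx.topic}: {s}" for s in strategies],
        }

    print(embedded_scrl_panel(StudyContext("moodle", "vpl-exercise", "recursion")))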

How would we design our systems for such an ‘elastic’ environment where students can choose the degree of embeddedness of various systems?

5 thoughts on “New viewpoint for regulation and reflection”

  1. I agree entirely – this was actually the point of my expansion of SCRL into a browser. This would allow SCRL to examine and interact with all sorts of behaviours instead of just the sensor tool data. It seemed like a useful first pass at the concept, with future versions having SCRL as a standalone server-side program and a set of plugins for browsers, IDEs, and other student utilities. Does that make more sense in this context?

  2. Well, this notion came out of my discussion with the Harvard researcher in Boston. Even if we embed regulation activities along with studying activities, they are seen as two separate events. We wondered about the possibility of a “truly embedded nature of regulation” – so ‘studying’ implies ‘studying with regulation’; there is no other option given to students; a kind of ‘strict regulatory method of studying’. Thoughts on this?

  3. I’m going to put together a diagram to show what my intentions are with the new SCRL; it might be helpful. The merging process I’ve been thinking about would allow our self-regulation system to monitor regulatory behaviour as merged with competency behaviour; it’s up to you to decide whether that would be properly embedded or not, but I think it’s a good first step. I’m entirely on board with the end goal, though. Regulation behaviours are an intrinsic part of motivation and learning, not just an optional process.

  4. As promised, I drew up a little diagram for my proposal to assess learning behaviours independently of domain-specific sensors. This would make SCRL a sensor in and of itself – a sensor which is sensitive to learning behaviours from a regulatory viewpoint. I apologize for the messiness and poor production quality – if needed I’ll make something a bit more professional. As it is, this is just some scribbles, but they should demonstrate the flow of events. The diagram is at the bottom; a more detailed text description follows.

    Starting with the learner interacting with a SCRL-enabled system, two data streams are created as the learner works. Sensor plugins generate sensor data, which are fed to competency assessment systems in the same manner as in SCRL 1.0. Additionally, however, user actions are recorded at a micro level – eye tracking, mouse tracking, user inputs, etc., are captured and wrapped as incomplete learning events, which are sent to the SCRL server for temporary storage.
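    To make the idea of an ‘incomplete learning event’ concrete, here is a small sketch of the kind of record I have in mind. The field names and the send function are assumptions for illustration only, not an actual SCRL data model.

        # Sketch of an "incomplete learning event": a micro-level user action
        # captured in context but not yet tied to an outcome (assumed schema).
        from __future__ import annotations

        import time
        import uuid
        from dataclasses import dataclass, field

        @dataclass
        class IncompleteLearningEvent:
            source: str                  # "eye-tracker", "mouse", "keyboard", ...
            action: str                  # "fixation", "click", "scroll", "keypress"
            payload: dict                # raw coordinates, key codes, etc.
            context: dict                # page/tool the learner was working in
            timestamp: float = field(default_factory=time.time)
            event_id: str = field(default_factory=lambda: str(uuid.uuid4()))
            outcome: dict | None = None  # filled in later by unification

        def send_to_scrl_server(event: IncompleteLearningEvent, store: list) -> None:
            """Stand-in for shipping the event to the server's temporary storage."""
            store.append(event)

        temporary_store: list = []
        send_to_scrl_server(
            IncompleteLearningEvent("mouse", "click", {"x": 410, "y": 212},
                                    {"tool": "moodle", "page": "ebook-ch3"}),
            temporary_store,
        )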

    At the server, SCRL preprocesses these events. It decides whether some events are extraneous or should not be kept (e.g., if sensitive information, or information which jeopardizes anonymity, is received); it also organizes the incomplete learning events into clusters according to time and importance. Mouse events are gathered into vectors or splines, partitioned by click events, pauses, and scrolls, for example.
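    A minimal sketch of that preprocessing, under assumed thresholds and field names (none of this is the real SCRL logic):

        # Drop events flagged as sensitive, then split mouse moves into segments
        # delimited by clicks, scrolls, or pauses longer than a threshold.
        PAUSE_THRESHOLD = 1.5  # seconds; an assumed cut-off for a "pause"

        def scrub(events):
            """Discard events flagged as sensitive or identity-revealing."""
            return [e for e in events if not e.get("sensitive", False)]

        def segment_mouse_trail(events):
            """Split mouse events into vectors, cut at clicks, scrolls, and pauses."""
            segments, current, last_t = [], [], None
            for e in sorted(events, key=lambda e: e["t"]):
                paused = last_t is not None and (e["t"] - last_t) > PAUSE_THRESHOLD
                if e["action"] in ("click", "scroll") or paused:
                    if current:
                        segments.append(current)
                    current = []
                if e["action"] == "move":
                    current.append((e["x"], e["y"]))
                last_t = e["t"]
            if current:
                segments.append(current)
            return segments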

    These incomplete events can be analyzed in their present state by comparison with learning behaviour ontologies of two types. First, they can be compared with production rule systems built around expert knowledge of behaviour and motivation; for example, such rules could notice erratic eye movement that frequently goes off-screen and deduce low motivation or high levels of distraction. This would include predicting which learning strategies have been chosen, and the predicted effectiveness of those strategies.
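    A production rule of this first type might look like the following sketch; the 30% threshold and the rule body are invented for illustration, not taken from any existing ontology.

        # One hand-written expert rule: frequent off-screen gaze suggests
        # distraction or low motivation (illustrative threshold only).
        def rule_erratic_gaze(eye_events):
            off_screen = sum(1 for e in eye_events if not e.get("on_screen", True))
            if eye_events and off_screen / len(eye_events) > 0.3:
                return {"inference": "high distraction or low motivation"}
            return None

        EXPERT_RULES = [rule_erratic_gaze]

        def apply_expert_rules(eye_events):
            """Run every expert rule and collect the inferences that fire."""
            inferences = []
            for rule in EXPERT_RULES:
                result = rule(eye_events)
                if result is not None:
                    inferences.append(result)
            return inferences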

    Second, these incomplete events can be compared with previously gained information about behaviours specific to this learner. For instance, the system may have learned that this learner doesn’t learn particularly well when studying on a weekday morning. These user-specific rules are generated in the processes that follow.
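    A toy version of how such a learner-specific rule could be derived from past outcomes (the time-of-day bucketing and the single rule emitted here are assumptions made for the sketch):

        # Derive one learner-specific rule: find the time-of-day bucket with the
        # weakest average outcome in this learner's completed events.
        from collections import defaultdict
        from datetime import datetime

        def learn_time_of_day_rule(completed_events):
            """completed_events: non-empty list of (timestamp, outcome_score) pairs."""
            buckets = defaultdict(list)
            for ts, score in completed_events:
                dt = datetime.fromtimestamp(ts)
                key = ("weekday" if dt.weekday() < 5 else "weekend",
                       "morning" if dt.hour < 12 else "afternoon/evening")
                buckets[key].append(score)
            worst = min(buckets, key=lambda k: sum(buckets[k]) / len(buckets[k]))
            return {"avoid": worst,
                    "mean_outcome": sum(buckets[worst]) / len(buckets[worst])}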

    Once competency information has been received from the competency assessment tools (SCALE, Mi-Writer, etc.), it is unified with the waiting incomplete learning events. This unification can be done in a number of ways – I suggest that each learning event be given a weight according to the estimated impact of that event, with weights determined by expert opinion and occasionally assessed by a BBN when there is a tight link (short time difference) between event generation and competency assessment. Studies should also supplement this, in my opinion.
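    As a sketch of one simple way this unification could be weighted (a time-decay stand-in for the expert/BBN weighting above; all field names are assumptions):

        def unify(incomplete_events, assessment, window=300.0):
            """Attach a competency assessment to the incomplete events that occurred
            within `window` seconds before it, weighting each event by how tightly
            it is linked in time to the assessment."""
            completed = []
            for ev in incomplete_events:
                gap = assessment["timestamp"] - ev["timestamp"]
                if 0 <= gap <= window:
                    ev["outcome"] = {
                        "competency": assessment["competency"],
                        "value": assessment["value"],
                        # tighter link (smaller gap) => larger weight
                        "weight": round(1.0 - gap / window, 3),
                    }
                    completed.append(ev)
            return completed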

    This unification results in a completed learning event – we have a context, an action, and an outcome. There are a number of analytics we can run on these sets of learning events:

    – we can examine the effectiveness of the chosen strategies and compare it with the assessment predicted during preprocessing, to help build an individualized set of learning strategies suited to the learner (a small sketch of this comparison follows the list).

    – we can assess and monitor relative levels of attentiveness to the various learning tasks and help the learner come up with better plans for learning.

    – we can do direct comparison of strategies at the individual, classroom, and learning domain level.

    – lots more – these are just off the top of my head.
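    Here is the promised sketch of the first analytic, under invented field names and toy aggregation (simple means); it only illustrates the shape of the comparison, not a worked-out method.

        # Compare predicted vs. observed strategy effectiveness per strategy.
        from collections import defaultdict

        def strategy_effectiveness(completed_events):
            """completed_events: dicts with 'strategy', 'predicted', and an
            'outcome' dict whose 'value' is the observed competency change."""
            by_strategy = defaultdict(lambda: {"predicted": [], "observed": []})
            for ev in completed_events:
                s = by_strategy[ev["strategy"]]
                s["predicted"].append(ev["predicted"])
                s["observed"].append(ev["outcome"]["value"])
            return {
                name: {
                    "predicted_mean": sum(v["predicted"]) / len(v["predicted"]),
                    "observed_mean": sum(v["observed"]) / len(v["observed"]),
                }
                for name, v in by_strategy.items()
            }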

    This system will create a lot of data, ideal for Big Data analysis. I imagine higher-speed production systems doing just-in-time analysis for feedback and reporting, while slower and more complex analysis ticks over in the background.
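    One way such a two-speed arrangement could be wired up, purely as an assumption about the architecture: cheap heuristics answered immediately on a fast path, with every event also queued for slower offline analysis.

        import queue

        fast_feedback_log = []        # immediate, just-in-time feedback
        batch_queue = queue.Queue()   # slower, more complex background analysis

        def ingest(event: dict) -> None:
            # Fast path: a cheap, illustrative heuristic answered right away.
            if event.get("action") == "idle" and event.get("duration", 0) > 600:
                fast_feedback_log.append("Prompt learner: long pause detected.")
            # Background path: everything is kept for deeper analysis later.
            batch_queue.put(event)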

    Monitoring self-regulation happens as follows:

    – We don’t have a clear view of initial task perception. This generally happens before the learner encounters SCRL, as this is when the learner is deciding on actions to take, not taking actions. We can infer that it happens, however, and may be able to make guesses about task perception based on the learner’s first actions. For instance, if the learner starts their study session by looking at Wikipedia articles on the general topics they’re interested in, this might indicate low familiarity with the topic, and that their task perception includes this lack of awareness.

    – Planning can be monitored by browsing and reading behaviours, as we can interpret planning as the collection of information before beginning to enact a plan. This may intermingle with enacting, though, and is discussed below.

    – Enacting can be monitored through application-specific websites and tools, such as Mi-Writer, Eclipse, etc., in the case of assignments. Some enacting may be reading in the form of study, however. Distinguishing Enacting from Planning in these cases will be difficult. I suggest that they can be distinguished by the nature of the specific learner behaviour – slow, comprehensive reading is study, whereas quick, brief reading may be interpreted as either planning or execution. If the only reading encountered is brief, we can’t distinguish them, but if there are stretches in which the learner reads comprehensively, we may interpret this as the execution of a “reading” strategy. I welcome criticism on this point, though – there are certainly problems with it! (A rough sketch of this heuristic follows the list.)

    – Adaptation of plans can be seen in a shift of activity that is not followed shortly afterwards by the learner leaving the learning system. If the user stops their current work, does something different briefly, and then either resumes or starts a new activity in the same learning domain, we can assume that they have adapted their plans and are now somewhere else in the regulation process – assessing whether they have moved to the task perception, planning or execution stages will take further work.
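    The Planning-vs-Enacting heuristic mentioned above, sketched with placeholder thresholds (the 60-second and page-coverage cut-offs are invented just to make the rule concrete):

        BRIEF_READ = 60        # seconds; assumed bound for "quick, brief reading"
        DEEP_COVERAGE = 0.7    # assumed page-coverage ratio for comprehensive reading

        def classify_reading_episode(duration_s: float, coverage: float) -> str:
            if duration_s <= BRIEF_READ:
                return "planning or enacting (ambiguous brief read)"
            if coverage >= DEEP_COVERAGE:
                return "enacting (execution of a 'reading' strategy)"
            return "planning (information gathering before acting)"

        print(classify_reading_episode(25, 0.2))   # brief -> ambiguous
        print(classify_reading_episode(900, 0.9))  # long and thorough -> enacting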

    That’s what I’ve got! Diagram is at the bottom. Thanks, and your opinion is welcome!

    [SCRL Diagram]

    • Excellent insights, Colin. I wish Prof Phil Winne himself could comment on future directions we could pursue.

      Technology aside, let us look at the theory itself. Can we uniquely observe values of variables associated with self-regulation across multiple study activities of students, and model them causally? (A toy sketch of one possible causal structure follows the list.) Some of the variables are:

      Study strategies
      Self-awareness
      Learning skills
      Self-assessment
      Lifelong learning
      Meta learning
      Intentional learning
      Independent learning
      Self-directed learning
      Strategic knowledge
      Cognitive tasks
      Self-knowledge
      Total-engagement activity
      Concentration
      Introspection
      Openness to change
      Self-discipline
      Responsible learning
      Social competency
      Delaying learning gratification
      Self-control
      Perseverance
      Self-efficacy
      Intrinsic motivation
      Perceived learning task value
      Goal orientation
      Help seeking
      Rehearsal
      Elaboration
      Organization
      Emotional control
      Motivational control
      Behavioural control
      Environmental control
      Effort
      Time management
      Cognitive competency
      Comfort in use of technology
      Task management
      Sensory input control
      Physical location control
      Cognitive load
      Self-talk
      Shutting out competing stimuli
      Task categorization
      Performing longer term outlook
      Social distractions
      Technology distractions
      Multiple task rotation
      Cultural prioritization
      Deliberate practice – towards expertise in a specific sequence of skills
      SRL/CRL – Interrelated web-of-skills
      Novice thinking
      Expert thinking
      External locus of control
      Cognitive acuity
      Emotional self- and co-regulation
      External pressure
      Student performance
      Amount of student thinking
      Depth of student thinking
      Conscious focus on learning
      Learning performance
      Self-estimation of performance
      Confidence in self-estimation of performance
      Internal locus of control
      Introspective honesty
      Pursuit of improvement
      Goal setting
      Planning
      Self-esteem
      Ego resiliency
      Stress management
      Cognitive bias
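      As a toy illustration of what ‘modelling them causally’ could mean for a handful of these variables, here is a sketch in which self-efficacy and intrinsic motivation feed into effort, and effort together with study strategies feeds into learning performance. The structure and weights are illustrative guesses, not empirical claims.

          # Toy linear causal model over a few SRL variables (assumed structure).
          CAUSAL_PARENTS = {
              "effort": ["self_efficacy", "intrinsic_motivation"],
              "learning_performance": ["effort", "study_strategies"],
          }

          WEIGHTS = {  # assumed effect sizes, for illustration only
              ("self_efficacy", "effort"): 0.5,
              ("intrinsic_motivation", "effort"): 0.4,
              ("effort", "learning_performance"): 0.6,
              ("study_strategies", "learning_performance"): 0.3,
          }

          def propagate(observed: dict) -> dict:
              """Fill in downstream variables from observed upstream ones."""
              values = dict(observed)
              for var, parents in CAUSAL_PARENTS.items():
                  if all(p in values for p in parents):
                      values[var] = sum(WEIGHTS[(p, var)] * values[p] for p in parents)
              return values

          print(propagate({"self_efficacy": 0.8, "intrinsic_motivation": 0.6,
                           "study_strategies": 0.7}))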
