SCALE

Researchers: David Boulanger, Jeremie Seanosky

SCALE stands for Smart Competence Analytics on LEarning.

Introduction

SCALE is a smart analytics technology that transforms learning traces into standardized measurements of competences.

SCALE Architecture PDF

SCALE Architecture Demo

SCALE Smart Processing Manager PDF

SCALE Smart Processing Manager Demo

Further description

SCALE is a smart competence analytics technology that analyzes your learning experiences in different learning areas. SCALE basically transforms your learning traces into measurements that will help you assess how proficient you are in the concepts introduced in your course. SCALE will also allow you to evaluate how confident you are at solving a particular exercise and how confident you are in the overall learning domain. SCALE’s mission is to provide you with a scale that will help you measure and optimize your learning as it occurs.

SCALE has been redesigned so that it and the client-side CODEX are completely independent of each other. Previously, the client-side sensor communicated directly with the server-side SCALE processor to transfer data instances, an approach with adverse consequences: it exposed the server to potential denial-of-service attacks and to loss of service for other clients.

It was also impossible for the SCALE engine to keep up with the tremendous volume of data packets flowing continuously towards the server.

We therefore added one more layer between CODEX (client-side) and SCALE (server-side). This new layer is a NoSQL OrientDB database called the Transit Database (TransitDB).

CODEX continuously sends the data instances it captures, at fixed intervals (e.g. every 30 seconds), to the TransitDB via a socket server. The socket server handles huge quantities of data far more easily than the HTTP request approach previously used.

All data received by the socket server from CODEX are relayed to the TransitDB to be stored and accumulated. The TransitDB is the ONLY connection between CODEX and SCALE, thereby removing any direct dependency between the two.

In simpler terms, CODEX does not have any direct link to SCALE, and vice versa.
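A minimal sketch of this decoupled pipeline (in Java; the class name, wire format, and payload are illustrative assumptions, not the actual CODEX/SCALE protocol): the client serializes a captured data instance and writes it to the socket server, which relays it onward for storage.

```java
import java.io.*;
import java.net.*;

// Illustrative sketch only: names and the JSON payload are assumptions,
// not the real CODEX wire format.
public class TransitRelaySketch {
    public static void main(String[] args) throws Exception {
        // Stand-in for the socket server that fronts the TransitDB.
        ServerSocket server = new ServerSocket(0); // ephemeral port
        Thread relay = new Thread(() -> {
            try (Socket conn = server.accept();
                 BufferedReader in = new BufferedReader(
                         new InputStreamReader(conn.getInputStream()))) {
                String packet;
                while ((packet = in.readLine()) != null) {
                    // In the real system this would be an insert into OrientDB.
                    System.out.println("relayed to TransitDB: " + packet);
                }
            } catch (IOException e) {
                e.printStackTrace();
            }
        });
        relay.start();

        // CODEX side: at a fixed interval, serialize and send one data instance.
        try (Socket client = new Socket("localhost", server.getLocalPort());
             PrintWriter out = new PrintWriter(client.getOutputStream(), true)) {
            out.println("{\"student\":\"s001\",\"type\":\"edit\",\"chars\":1204}");
        }
        relay.join();
        server.close();
    }
}
```

Because the client only ever talks to the socket server, neither side needs to know the other exists.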

SCALE, on its side, operates autonomously and independently. By design, SCALE continuously polls the TransitDB to see whether data from CODEX are available. If so, SCALE takes one CODEX data packet at a time, processes and analyzes it, and then marks that data packet as “processed” so the SCALE engine won’t process it again.
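This polling cycle can be sketched as follows (the in-memory list stands in for the OrientDB TransitDB, and the packet contents are illustrative; the real engine queries the database):

```java
import java.util.*;

// Sketch of SCALE's polling loop. TransitPacket and the in-memory list are
// stand-ins for the TransitDB; payloads are made up for illustration.
public class ScalePollerSketch {
    static class TransitPacket {
        final String payload;
        boolean processed = false;
        TransitPacket(String payload) { this.payload = payload; }
    }

    public static void main(String[] args) {
        List<TransitPacket> transitDb = new ArrayList<>(Arrays.asList(
                new TransitPacket("edit s001"),
                new TransitPacket("build s001"),
                new TransitPacket("debug s002")));

        // One polling pass: take each unprocessed packet, analyze it,
        // then flag it so it is never analyzed twice.
        for (TransitPacket p : transitDb) {
            if (p.processed) continue;      // skip already-handled packets
            System.out.println("analyzed: " + p.payload);
            p.processed = true;             // mark as "processed"
        }

        long remaining = transitDb.stream().filter(p -> !p.processed).count();
        System.out.println("unprocessed left: " + remaining);
    }
}
```

Marking packets as processed (rather than deleting them) lets the accumulated traces remain available for later reprocessing or auditing.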

Upon completing the processing and analysis on a given data packet, SCALE stores the analysis results in a MySQL database, ready to be used by the visualization and reporting tools available, such as MI-DASH.

6 thoughts on “SCALE”

  1. EIDEE (which stands for Eclipse Integrated Development Environment Extension) is an extension of the overall SCALE system (Smart Competence Analytics on LEarning). SCALE, through EIDEE, collects learning traces in the Java programming domain and is currently being trialled in an introductory Java programming course at Athabasca University. “Hackystat is an open source framework for collection, analysis, visualization, interpretation, annotation, and dissemination of software development process and product data (https://code.google.com/p/hackystat/).” Hackystat provides a plug-in (or sensor) that collects data about the coding activities of students. Since Hackystat is open source, we customized the Eclipse Hackystat sensor for our own purposes in order to capture the data types that will enable us to build a competence portfolio for every student.

    Basically, EIDEE collects the following data types: edit, build, and debug. The edit data type tracks activities such as opening, modifying, saving, and closing a file within Eclipse. The sensor reports the number of characters, statements, and methods of a class when it detects that the class has been modified. The build data type captures the errors generated when a build fails. Finally, the debug data type enables us to recognize when a student starts and ends a debugging session, when and where he/she sets breakpoints, and which code blocks he/she steps over or into. In summary, the sensor knows when a student starts writing or modifying code, collects the errors generated from building the student’s code, and finally shows how the student managed to debug and resolve those errors. The list of all data types can be found at http://code.google.com/p/hackystat-sensor-eclipse/wiki/UserGuide. In addition, we have customized the Eclipse Hackystat sensor to capture the student’s source code every few seconds.

    Every assignment in the Java programming course is marked according to the following five rubrics: functionality, testing, debugging, documentation, and regulation. Our goal is to collect enough data for every rubric so that the system can measure the competences of the student in the concepts related to the completed assignment. For that purpose, we continuously track source code and strive to rebuild the project environment on a server for every few source-code captures to see the progression of errors and the types of those errors. We may then analyze which errors students make most frequently and whether they succeed in solving them. At this point, we use the Eclipse JDT compiler to build the abstract syntax tree of every source code capture and store the AST in a source code ontology. We also record the errors output by the compiler in a bug ontology. Eclipse JDT breaks down source code into approximately 84 programming constructs and reports approximately 560 different error types. We then propose to infer competences (which, for now, is limited to analyzing the exposure of a student to a given concept) from the results of those compilations by means of a rule-based engine such as JESS or BaseVISor.

    Finally, we provide students with a dashboard (MI-DASH) so that they can view their performance in the course assignments and programming exercises. We also aim to embed a set of tools within the dashboard that enable students to self- and co-regulate their learning and share their experience with other stakeholders in their learning process (tutors, peers, parents, etc.). MI-DASH gives students control over their learning process and provides the means to set goals and strategies to improve their coding competences.
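    The error-frequency analysis described above can be sketched as a simple tally (the error type names and captures here are made up for illustration; the real data would come from Eclipse JDT compilation results stored in the bug ontology):

```java
import java.util.*;

// Illustrative sketch of counting which compiler error types students
// hit most often. Error names are invented examples, not JDT's codes.
public class ErrorFrequencySketch {
    public static void main(String[] args) {
        // Error types reported across successive source-code captures.
        List<String> captures = Arrays.asList(
                "UndefinedName", "ParsingError", "UndefinedName",
                "TypeMismatch", "UndefinedName");

        // Tally how often each error type occurs.
        Map<String, Integer> frequency = new TreeMap<>();
        for (String error : captures) {
            frequency.merge(error, 1, Integer::sum);
        }
        frequency.forEach((error, count) ->
                System.out.println(error + ": " + count));
    }
}
```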

  2. For now, Codex will be displayed as a separate project under the Research Projects tab though it is a component of SCALE. For more information on Codex, please visit the Codex project page when it’s available.
