Automating Educational Research Through Learning Analytics: Data Balancing and Matching Techniques
This tutorial lays the foundation for observational research by delving into data balancing and the assumptions underlying causality. More specifically, it thoroughly demonstrates three matching techniques and their corresponding imbalance metrics for minimizing and measuring data imbalance: Coarsened Exact Matching/L1, Mahalanobis Distance Matching/Average Mahalanobis Imbalance, and Propensity Score Matching/Difference in Means. The tutorial provides interactive and collaborative programming and analysis tasks. In particular, a web application dashboard is made available to assess the level of data imbalance in two closely related randomized and observational datasets. The tutorial also offers insights into the potential of observational studies as learning analytics matures and research on machine learning problems like matching, dimensionality reduction, and optimization progresses.
Boulanger, David – Athabasca University, Canada – dboulanger@athabascau.ca
David Boulanger is a student and data scientist in the learning analytics research group at Athabasca University. His primary research focus is on observational study designs and the application of computational tools and machine learning algorithms in learning analytics, including writing analytics.
Kumar, Vivekanandan Suresh – Athabasca University, Canada – vive@athabascau.ca
Dr. Kumar is a Professor in the School of Computing and Information Systems at Athabasca University, Canada. He holds the Natural Sciences and Engineering Research Council of Canada’s (NSERC) Discovery Grant on Anthropomorphic Pedagogical Agents, funded by the Government of Canada. His research focuses on developing anthropomorphic agents, which mimic and perfect human-like traits to better assist learners in their regulatory tasks. His research includes investigating technology-enhanced erudition methods that employ big data learning analytics, self-regulated learning, co-regulated learning, causal modeling, and machine learning to facilitate deep learning and open research. For more information, visit http://vivek.athabascau.ca.
Fraser, Shawn N. – Athabasca University, Canada – shawnf@athabascau.ca
Dr. Fraser is an Associate Dean of Teaching & Learning and Associate Professor at Athabasca University, and an Adjunct Assistant Professor in Physical Education and Recreation at the University of Alberta. His research interests include understanding how stress can affect rehabilitation success for heart patients. He teaches research methods courses in the Faculty of Health Disciplines and is interested in interdisciplinary approaches to studying and teaching research methods and data analysis.
Monday, 11 June 2018 (afternoon session)
| Time | Event |
|------|-------|
| 12:30 PM | Lunch break |
| 1:30 PM | Theoretical Section – Theories Underlying Causality, Observational Studies, and Learning Analytics |
| 2:15 PM | Theoretical Section – A Walk-through of Propensity Score Matching and Alternative Matching Techniques |
| 3:00 PM | Coffee break |
| 3:30 PM | Hands-on Section – Interactive Causal Analysis on Lalonde’s Randomized and Observational Datasets [app] [video] |
| 4:00 PM | Hands-on Section – Measuring Data Imbalances in Lalonde Datasets [files] |
| 4:30 PM | Discussion and Q&A |
| 5:00 PM | End of session |
Questions are welcome throughout the session, in addition to the 30-minute period at the end reserved for general discussion and further Q&A.
Internet connectivity will be required for participants to download the tutorial materials. Participants who are keen to engage more actively with the interactive analysis and programming activities are requested to bring a laptop (Windows Vista/7/8/10, Mac OS X, Linux) on which to install R and RStudio, and to use a web browser for accessing the instructional materials and running the Shiny web application.
- Install RStudio Desktop Free Edition (requires R 2.11.1+): https://www.rstudio.com/products/rstudio/download2/
- If you do not have R installed on your computer, install the latest version of R: https://cran.rstudio.com/
- Download the tutorial’s code files and datasets, and install the required R packages (a setup sketch follows this list).
- Go to the Shiny interactive web application.
- View a demo of the Shiny web application.
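
For participants who want to prepare their environment in advance, the following minimal setup sketch installs the packages used in the examples below. It assumes that shiny, cem, and MatchIt install from CRAN; MatchingFrontier may need to be installed from its development repository instead, and the tutorial's own scripts may use a different package set.

```r
# Environment setup sketch (assumed package set; the tutorial's own
# scripts may use a different or larger set of packages).
install.packages(c("shiny", "cem", "MatchIt"))

# MatchingFrontier may not be available on CRAN; if the line below
# fails, it can typically be installed from its development repository
# with devtools::install_github().
install.packages("MatchingFrontier")
```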
Click here to register. Standard rates apply until June 7; afterwards, late and on-site registration rates apply. This tutorial, “Automating Educational Research Through Learning Analytics: Data Balancing and Matching Techniques,” is listed as T1.
This tutorial will be held at the Université du Québec à Montréal (UQAM).
Pavillon Sherbrooke, 200 Sherbrooke Street West
Montréal (Québec) H2X 3P2
Coffee breaks and lunch will take place in the same building (Pavillon Sherbrooke), in room SH-4800 (a multipurpose room).
For more information, please visit: http://its2018.its-conferences.com/location/conference-venue/.
June 7 – End of standard registration rates for this tutorial.
June 11 – Automating Educational Research Through Learning Analytics: Data Balancing and Matching Techniques (Tutorial)
There is no call for papers.
1. Objectives
The objectives of this tutorial include presenting guidelines on how to conduct causal analyses in observational settings. It compares the key properties of the gold-standard randomized experiment against those of the naturally occurring observational study, and it frames the randomized experiment as a special case of the more general observational study, one in which data balance is inherently optimized. The tutorial promotes discussion of the role learning analytics can play in educational research: enhancing causal analysis through the collection of a wider range of digital learning data and the inclusion of a more diverse set of learners, while minimizing the bias introduced by confounding factors by approximating fine-grained randomized block designs through appropriate matching techniques. In particular, three matching techniques are explored, Coarsened Exact Matching (CEM), Mahalanobis Distance Matching (MDM), and Propensity Score Matching (PSM), along with their corresponding data imbalance metrics: the L1 vector norm, Average Mahalanobis Imbalance (AMI), and the sum of differences in means. The tutorial also offers hands-on activities in which participants are invited to 1) programmatically measure data imbalance in two closely related randomized and observational datasets (a sketch of this kind of measurement follows below), and 2) measure and visualize data imbalance through an interactive dashboard implemented with the Shiny web application framework and the state-of-the-art MatchingFrontier R package and its dependencies.
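
As a taste of the first hands-on activity, here is a minimal sketch that measures multivariate imbalance with the cem package's imbalance() function on the LaLonde experimental data (LL) that ships with cem. This is an illustrative snippet, not the tutorial's own script; the package and dataset names are those of the cem distribution.

```r
# A minimal sketch: measuring multivariate imbalance (L1) on the
# LaLonde (1986) experimental sample shipped with the 'cem' package.
library(cem)

data(LL, package = "cem")

# Compute imbalance between treated and control groups across all
# covariates, excluding the treatment indicator and the outcome.
imb <- imbalance(group = LL$treated, data = LL,
                 drop = c("treated", "re78"))
print(imb)  # reports the multivariate L1 statistic and per-covariate statistics
```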
2. Audience
The tutorial targets educational researchers, data scientists, and teachers. Some background in statistics (e.g., descriptive statistics, probability, analysis of variance) and research methods (e.g., randomized designs, observational studies, factorial treatment structures) is an asset. Participants only need to register for this tutorial session in order to engage in the interactive discussion. Participants are also invited to contact the authors by email (dboulanger@athabascau.ca) to introduce themselves and share their level of expertise as well as their expectations of the tutorial.
3. Outcomes
The learning outcomes of this tutorial include:
- Describing experimental methods and studies in education/learning analytics
- Proposing a valid observational study design using matching
- Comparing different matching techniques: Coarsened Exact Matching, Mahalanobis Distance Matching, and Propensity Score Matching (a code sketch follows this list)
- Demonstrating the suboptimality of Propensity Score Matching, the most popular matching technique in observational studies
- Measuring the accuracy (in terms of data imbalance) of the proposed design against a randomized experiment
- Performing interactive observational studies using Shiny/R
- Discussing why valid observational study designs are important, whether machine learning deals mainly with observational data, and what the real impact of properly handling observational data is on learning analytics
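
To make the comparison of the three matching techniques concrete, here is a hedged sketch using the MatchIt package as one possible toolchain (the tutorial's own scripts may differ). The formula uses covariates available in the lalonde dataset bundled with MatchIt.

```r
# A sketch comparing PSM, MDM, and CEM on the lalonde observational data
# bundled with 'MatchIt' (an assumed toolchain, not the tutorial's script).
library(MatchIt)
data("lalonde", package = "MatchIt")

f <- treat ~ age + educ + married + nodegree + re74 + re75

# Propensity Score Matching: nearest-neighbour matching on the propensity score
m.psm <- matchit(f, data = lalonde, method = "nearest")

# Mahalanobis Distance Matching: nearest-neighbour on Mahalanobis distance
m.mdm <- matchit(f, data = lalonde, method = "nearest",
                 distance = "mahalanobis")

# Coarsened Exact Matching (may require the 'cem' package in older MatchIt versions)
m.cem <- matchit(f, data = lalonde, method = "cem")

# Compare post-matching balance across the three methods
summary(m.psm)$sum.matched
summary(m.mdm)$sum.matched
summary(m.cem)$sum.matched
```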
4. Tutorial Format
This half-day tutorial includes a presentation of the underlying theories followed by a session of hands-on activities. Discussion among participants and presenter(s) will be ongoing throughout. Participants, in small groups, will discuss key traits of matching methods, imbalance metrics, and key differences between observational studies and randomized experiments. They will have an opportunity to work, individually or in small groups, with hands-on data, tools, and models to perform an observational study using Coarsened Exact Matching, Mahalanobis Distance Matching, or Propensity Score Matching. Those who wish may also work in small groups to answer different types of research questions using an interactive Shiny web application (a toy sketch of such an app follows below). Three levels of participation will be offered: 1) following the presentation (every step will be shown on slides); 2) running analyses through the web application, with no coding required; and 3) programming portions of the analyses directly in a provided R script.
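
As an illustration of the kind of no-coding interaction such a dashboard enables, here is a toy Shiny sketch, not the tutorial's actual application, that recomputes the L1 imbalance statistic on the cem package's LL data as the user toggles covariates.

```r
# A toy Shiny sketch (not the tutorial's dashboard): pick covariates and
# view the resulting multivariate imbalance on the 'cem' LL (LaLonde) data.
library(shiny)
library(cem)

data(LL, package = "cem")
covs <- setdiff(names(LL), c("treated", "re78"))

ui <- fluidPage(
  titlePanel("Data imbalance explorer (toy example)"),
  checkboxGroupInput("vars", "Covariates to balance on:",
                     choices = covs, selected = covs),
  verbatimTextOutput("imb")
)

server <- function(input, output) {
  output$imb <- renderPrint({
    req(input$vars)
    keep <- c("treated", input$vars)
    # Recompute imbalance on the selected covariates only
    imbalance(group = LL$treated, data = LL[, keep, drop = FALSE],
              drop = "treated")
  })
}

shinyApp(ui, server)
```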
5. Rationale
The gold-standard randomized experiment has encountered significant resistance in educational research over the last few decades because of its inherently discriminatory and intrusive properties, which raise ethical issues about providing fair opportunities to all students and thereby limit and compromise the quality of randomization and the benefits it confers (Hannan, 2008; Kent, 2011; Silverman, 2009; Sullivan, 2011). Observational studies, on the other hand, although well suited to educational settings, are often viewed as offering mixed results. For example, some argue that observational studies overestimate treatment effects (Concato et al., 2000) and that their results are more biased due to the presence of confounding factors, a bias that randomization minimizes (“At Work”, 2016). However, observational studies have also proved to be the only way researchers can explore certain questions, and they are more affordable in terms of cost and participant burden for longer-term longitudinal studies (“At Work”, 2016). In addition, they permit larger sample sizes, which hold the promise of more powerful experimental results.
Although observational study designs are not yet mature enough to decisively supplant randomized experiments, they have garnered significant interest in recent years (King & Nielsen, 2016). Important challenges nonetheless persist. For example, to be accurate, observational studies require identifying as many confounding factors as possible to minimize the underlying bias. Yet increasing the variety of data types collected and blocking on these variables to approximate randomized block designs, without investigating their actual individual and combined causal effects (accounting for interaction among variables) on targeted outcomes, constitutes a serious threat to the validity of observational study designs, since it can further increase data imbalance. Hence, observational study designs require a holistic approach in which the impacts of both treatment variables and covariates are simultaneously and iteratively assessed and updated. Traditionally, observational studies account for between 5 and 50 confounding factors and still require more observations than covariates (Roberts et al., 2015). However, techniques leveraging adapted dimensionality reduction have recently been explored to accommodate situations where covariates far outnumber the observations that can be collected (Roberts et al., 2015).
Propensity Score Matching has become one of the favorite observational methods for investigating naturally occurring data (King & Nielsen, 2016). However, King and colleagues underscore major weaknesses of PSM, such as the data imbalance (the PSM paradox) created by its dimensionality reduction, and compare alternative approaches like Coarsened Exact Matching (Iacus et al., 2012) and Mahalanobis Distance Matching (Ho et al., 2007; King & Nielsen, 2016). They advocate that matching techniques may prove effective in some scenarios and suboptimal in others, and that several types of matching methods should be tested, including hybrid versions. Several optimization (loss) functions then need to be calculated to measure the level of data imbalance in matched control and treatment groups, such as L1, AMI, and the average difference in means. King and colleagues developed an R package, MatchingFrontier, to facilitate the assessment and selection of optimal matching methods by means of visualizations (King et al., 2014). Hence, this tutorial introduces MatchingFrontier and provides directions for further research on statistical algorithms that would allow the machine to automatically select optimal matching methods.
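
For orientation, here is a hedged sketch of the MatchingFrontier workflow; the argument names follow the package's early documentation and may differ across versions, so treat this as an assumed outline rather than a definitive recipe.

```r
# A hedged sketch of the balance-sample size frontier workflow with
# MatchingFrontier (argument names assumed from early package documentation).
library(MatchingFrontier)

data(lalonde)  # the package ships a copy of the LaLonde data

# Match on all covariates except the treatment indicator and the outcome.
match.on <- setdiff(colnames(lalonde), c("treat", "re78"))

# Build the frontier tracing imbalance against the number of pruned observations.
frontier <- makeFrontier(dataset   = lalonde,
                         treatment = "treat",
                         outcome   = "re78",
                         match.on  = match.on)

plot(frontier)  # visualize the imbalance/sample-size trade-off
```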
Clearly, matching, dimensionality reduction, and optimization (e.g., minimizing data imbalance) are machine learning problems that are also central to causal observational studies. From a theoretical and mathematical viewpoint, the observational study is the general case of the randomized experiment, suggesting that as technology evolves, so will the power of observational studies. The design and development of scalable, reliable, and valid observational studies constitute an important research area in learning analytics, a field mainly concerned with forming predictive, context-aggregate (average treatment effect) and context-specific (individual treatment effect) causal models in order to foresee educational outcomes and prescribe recommendations that remedy suboptimal learning processes (Boyer & Bonnin, 2016). In other words, observational studies are concerned with the individuality of the learner, and learning analytics embodies the ability to adapt pedagogy to the unique personality of the learner.
6. Prior Experience
Our team previously presented a similar tutorial, entitled “Matching Techniques: Hands-on Approach to Measuring and Modeling Educational Data (Tutorial),” at the 2017 International Conference on Artificial Intelligence in Education (AIED) and will conduct a related workshop, “Open Research and Observational Study for 21st Century Learning,” at the upcoming 2018 International Conference on Smart Learning Environments (ICSLE).
References
- At Work, Issue 83, Winter 2016. Institute for Work & Health, Toronto.
- Boyer, A., & Bonnin, G. (2016). Higher education and the revolution of learning analytics. International Council for Open and Distance Education (ICDE).
- Concato, J., Shah, N., & Horwitz, R. I. (2000). Randomized, controlled trials, observational studies, and the hierarchy of research designs. The New England Journal of Medicine, 342(25), 1887–1892.
- Dehejia, R. H., & Wahba, S. (1999). Causal effects in nonexperimental studies: Reevaluating the evaluation of training programs. Journal of the American Statistical Association, 94(448), 1053–1062.
- Dehejia, R. H., & Wahba, S. (2002). Propensity score-matching methods for nonexperimental causal studies. Review of Economics and Statistics, 84(1), 151–161.
- Hannan, E. L. (2008). Randomized clinical trials and observational studies: Guidelines for assessing respective strengths and limitations. JACC: Cardiovascular Interventions, 1(3), 211–217. http://dx.doi.org/10.1016/j.jcin.2008.01.008
- Ho, D., Imai, K., King, G., & Stuart, E. (2007). Matching as nonparametric preprocessing for reducing model dependence in parametric causal inference. Political Analysis, 15, 199–236.
- Iacus, S. M., King, G., & Porro, G. (2012). Causal inference without balance checking: Coarsened exact matching. Political Analysis, 20(1), 1–24.
- King, G., Lucas, C., & Nielsen, R. A. (2014). The balance-sample size frontier in matching methods for causal inference. American Journal of Political Science.
- King, G., & Nielsen, R. (2016). Why propensity scores should not be used for matching. Working paper.
- LaLonde, R. J. (1986). Evaluating the econometric evaluations of training programs with experimental data. The American Economic Review, 604–620.
- Kent, W. (2011). The advantages and disadvantages of observational and randomised controlled trials in evaluating new interventions in medicine. Clinical Sciences [Internet], Version 1. Retrieved September 19, 2017, from https://clinicalsciences.wordpress.com/article/the-advantages-and-disadvantages-of-1blm6ty1i8a7z-8/
- Roberts, M. E., Stewart, B. M., & Nielsen, R. (2015). Matching methods for high-dimensional data with applications to text.
- Silverman, S. L. (2009). From randomized controlled trials to observational studies. The American Journal of Medicine, 122(2), 114–120. http://dx.doi.org/10.1016/j.amjmed.2008.09.030
- Sullivan, G. M. (2011). Getting off the “gold standard”: Randomized controlled trials and education research. Journal of Graduate Medical Education, 3(3), 285–289. http://doi.org/10.4300/JGME-D-11-00147.1