Facing the Challenges of Assessing 21st Century Skills in the Newly Emerging Educational Ecosystems

Building a Roadmap to Successful Student Assessment

Hotel Beatriz, Toledo, Spain, 18 September 2015 in conjunction with EC-TEL 2015

The new requirements and opportunities arising from current hybrid and scalable educational ecosystems, and from a rapidly changing and demanding labor market, require adapting existing assessment models, techniques and tools. In this workshop we will discuss new forms of assessment, needs and potential solutions, from the perspective of the different stakeholders involved in the assessment process (teachers, learners, policy makers, potential employers, etc.).

Call for Papers

Assessment is a key part of the learning process, whether used for demonstrating the achievement of certain learning outcomes or for formative purposes. Regarding the latter, numerous studies emphasise the importance of quick and personalised feedback for supporting students' learning. Regarding the former, in today's changing context, where lifelong learning is fundamental and the trend is towards personalised curricula with a wide range of courses (in both formal and informal settings), the demand for reliable certification of skills is increasing.

Both aspects of assessment pose formidable challenges in today's education. One key challenge is the new skills and abilities (cognitive, metacognitive, procedural, emotional, etc.) that the knowledge society demands learners acquire and that, in consequence, have to be evaluated, either to support their acquisition or to guarantee their accomplishment. Another big challenge is scale, despite the pedagogical case for personalised learning. A notable example is MOOCs, which promise education within everyone's reach but dilute the presence of the instructor among a sea of learners. Other emergent learning contexts, such as 3D virtual worlds or augmented reality, may also risk leaving students isolated when used outside a formal class without the support of a teacher. In summary, while these newly emergent learning settings offer invaluable educational potential, there is also a troubling tendency towards an evanescent role for the instructor. This particularly affects assessment, which is also impacted by additional constraints such as security and identification issues, as well as further challenges such as inclusion, accessibility and interoperability.

On the other hand, the rapidly evolving educational landscape is undergoing a search for new business models, an evolution of the traditional leading roles (more autonomous students, teachers no longer mere transmitters of knowledge), the increasing prominence of other stakeholders, the appearance of new agents on the scene, etc. New requirements and opportunities, challenges and solutions converge, shaping the need for new educational and, consequently, assessment models, techniques and tools.

Workshop Topics

Topics include but are not limited to:

  • Assessment challenges and opportunities in current and envisioned learning ecosystems
  • Assessment (methods, tools and experiences) in emerging learning models (including, but not limited to: MOOCs, SPOCs, remote labs, serious games, 3D virtual worlds, simulations, etc.)
  • New methods, techniques and tools for supporting and improving assessment (including, but not limited to: learning analytics, gamification, etc.)
  • Assessment of 21st century skills

Submission formats

Contributions must be submitted through EasyChair. Submissions must use the LNCS template. All submissions will be evaluated by at least two members of the Program Committee. All accepted papers (full and short) will be invited for an oral presentation at the workshop. Accepted papers will be published in the CEUR workshop proceedings (pending confirmation).

  • Full papers. 8-10 pages.
  • Short papers. 4-6 pages.

Important Dates

  • 30 April 2015: Call for Workshop Contributions opens
  • 21 June 2015: Deadline for paper submission
  • 20 July 2015: Notification of acceptance
  • 30 July 2015: Camera ready versions of papers
  • 18 September 2015: Workshop


Workshop Chairs

  • Raquel M. Crespo-García, Universidad Carlos III de Madrid, Spain
  • Carlos Alario-Hoyos, Universidad Carlos III de Madrid, Spain
  • Carlos Delgado Kloos, Universidad Carlos III de Madrid, Spain
  • Armando Fox, UC Berkeley, USA
  • Maren Scheffel, Open Universiteit Nederland, The Netherlands

Programme Committee

  • Linda Castañeda, Universidad de Murcia, Spain
  • Manuel Castro, UNED, Spain
  • Michael Derntl, RWTH Aachen University, Germany
  • Yannis Dimitriadis, Universidad de Valladolid, Spain
  • Ed Gehringer, North Carolina State University, USA
  • Davinia Hernández-Leo, Universitat Pompeu Fabra, Spain
  • Alejandra Martínez, Universidad de Valladolid, Spain
  • Mar Pérez-Sanagustín, Pontificia Universidad Católica de Chile
  • Henri Pirkkalainen, University of Jyväskylä, Finland
  • María Jesús Rodríguez Triana, École Polytechnique Fédérale de Lausanne, Switzerland
  • Bernd Simon, Knowledge Markets Consulting, Austria

Preliminary Program

9:15 – 9:30 Welcome and introduction to the workshop
9:30 – 11:10

Presentation of selected papers

  • Armando Fox, David Patterson, Samuel Joseph and Paul McCulloch,

    We describe our experience developing and using a specific category of cloud-based autograder (automatic evaluator of student programming assignments) for software engineering. To establish our position in the landscape, our autograder is fully automatic rather than assisting the instructor in performing manual grading, and test based, in that it exercises student code under controlled conditions rather than relying on static analysis or comparing only the output of student programs against reference output. We include a brief description of the course for which the autograders were built, Engineering Software as a Service, and the rationale for building them in the first place, since we had to surmount some new obstacles related to the scale and delivery mechanism of the course. In three years of using the autograders in conjunction with both a software engineering MOOC and the residential course on which the MOOC is based, they have reliably graded hundreds of thousands of student assignments, and are currently being refactored to make their code more easily extensible and maintainable. We have found cloud-based autograding to be scalable, sandboxable, and reliable, and students value the near-instant feedback and opportunities to resubmit homework assignments more than once. Our open-source, cloud-based, producer-consumer autograder architecture allows custom autograders to be plugged in easily, and while it is currently compatible with the OpenEdX platform, it should be easy to plug into other Learning Management Systems.

  • Shaykhah S. Aldosari and Davide Marocco,

    In recent years the field of alternative haptic interfaces has been expanding, and improvements are being made by developing, testing and refining devices, along with software, to give users the possibility of interacting with three-dimensional virtual objects in an intuitive way. Such technology can play a significant role in education, especially as a complement to MOOC delivery, both for training and for assessment purposes. This paper presents a description of an educational haptic system for chemistry experiment simulations and molecular visualization that can complement a virtual delivery system by providing hands-on experiences and practical assessment.

  • Julian Dehne and Ulrike Lucke, “An infrastructure for cross-platform competence-based assessment”
  • María Jesús Rodríguez Triana

    Schools are reorienting their curricula towards the development of 21st century skills and competences. These changes entail an adaptation of the assessment methods. Alternatives such as e-portfolios, rubrics, peer and online assessment, and Learning Analytics seem promising for that purpose. However, the current trends towards hybrid or blended learning (involving face-to-face and distance learning), Computer Supported Collaborative Learning (combining face-to-face and computer-mediated interactions in individual and collaborative activities), and the integration of multiple tools forming Distributed Learning Environments (DLEs) hinder the collection of evidence throughout the learning process. Thus, although existing e-assessment tools have proven useful in supporting 21st century skills, they often do not fit the newly emerging educational ecosystems.
    In this presentation, we focus on two skills, namely collaboration and ICT literacy, and on how the alignment between Learning Design and Learning Analytics may help address the complexity of blended CSCL scenarios supported by DLEs.
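The first presentation above describes a cloud-based, test-based, producer-consumer autograder architecture. As a purely illustrative aid for the discussion, the core idea can be sketched as follows; all names here (`run_tests`, `grade_all`, the submission format) are hypothetical assumptions, not the authors' actual code, and real deployments would add sandboxing, persistence, and parallel workers:

```python
import queue

def run_tests(submission):
    """Hypothetical test-based grader: execute the student's code under
    controlled conditions and check its behaviour against instructor
    tests, rather than inspecting its source or comparing raw output."""
    namespace = {}
    exec(submission["code"], namespace)  # real systems would sandbox this
    student_fn = namespace[submission["entry_point"]]
    passed = sum(1 for args, expected in submission["tests"]
                 if student_fn(*args) == expected)
    return 100.0 * passed / len(submission["tests"])

def grade_all(submissions):
    """Producer-consumer sketch: submissions are enqueued (producer side,
    e.g. by the LMS) and a worker drains the queue and grades each one."""
    q = queue.Queue()
    for s in submissions:      # producer: enqueue incoming submissions
        q.put(s)
    results = {}
    while not q.empty():       # consumer: grade and record a score
        s = q.get()
        results[s["student"]] = run_tests(s)
    return results

subs = [
    {"student": "alice", "entry_point": "double",
     "code": "def double(x):\n    return 2 * x",
     "tests": [((1,), 2), ((3,), 6)]},
    {"student": "bob", "entry_point": "double",
     "code": "def double(x):\n    return x + x if x > 0 else 0",
     "tests": [((1,), 2), ((-3,), -6)]},
]

if __name__ == "__main__":
    print(grade_all(subs))  # alice passes both tests, bob fails one
```

In the real architecture the queue would be shared between the LMS (producer) and independent grader processes (consumers), which is what makes the approach scalable and lets custom graders be plugged in.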

11:10 – 11:40

Coffee break

11:40 – 12:45

Interactive discussion and hands-on workshop about current challenges for assessment and the roadmap to a successful assessment framework that responds to such challenges.

12:45 – 13:00 Final debate and conclusions