The Question and Test Interoperability (QTI) specification was first released as a draft version 1.0 in December 1999. Entering its twentieth year, QTI will see a new version 3.0 released in 2019. Join the QTI Project Group leadership as we review the soon-to-be-released v3.0 specification, explore the new features and functionality it offers, and learn how you can benefit as a vendor/publisher, assessment administrator, or other testing stakeholder. We will also detail the improved and more robust profile-based conformance certification process and discuss methods of migrating existing QTI content and systems. Join us for this opening panel to understand QTI v3 from the leaders of QTI.
Feedback can be a powerful stream of potentially useful information for improving student engagement and the learning experience. But that flow is often both unidirectional and 'unfinished'. Without technology, it can also be a time-consuming and tiring process. This presentation describes several case studies of using social feedback technology in classrooms to close the feedback loop. We recognize that, for many students, this can often only be fully achieved by facilitating confidential feedback. We also recognize the importance of bidirectional 'dynamic feedback', in which students and instructors respond to each other in constructive, ongoing, iterative 'conversations'. The prize of using dynamic feedback technology is an enhanced learning environment.
New standards are requiring states to consider changes in assessment design and item types in order to assess students efficiently and effectively. States testing 100% online can leverage computer adaptive testing (CAT), which requires developing a deep pool of test items to function best. Item types such as digital simulations use complex activities to provide multidimensional evidence of student understanding of both science content and practices. Both simulations and CAT, however, are expensive to produce, which makes collaboration between states a good strategy for increasing quality and decreasing cost. This presentation provides an overview of the emerging trends impacting statewide testing, the success of which depends, more than ever, on vendors becoming certified in the Question and Test Interoperability (QTI) standard.
Program Manager Online Assessment, Maryland Department of Education
Dale Cornelius is the chair of IMS Global’s State Assessment Innovation Leadership Network. Previously, as chair of the PARCC Assessment Technology Group, he helped develop the first large-scale assessment designed to be delivered 100% online and accessible to all students...
Our session will share the results of a study we ran at UNC-Chapel Hill. The study introduced automated assessment technology alongside traditional and peer-to-peer assessment models in a large introductory Philosophy course. Our goal was to measure which approach had the strongest impact on improving student writing outcomes. We will share our data to discuss the extent to which technology outperformed the other models. This, in turn, will provide the basis for outlining how automated assessment technology can overcome the limitations of both the traditional and peer-to-peer models.
What are test publishers of college entrance exams doing in the way of innovation? Many state assessment programs have been leveraging technology to go beyond multiple choice for a number of years, creating assessments that are not only more engaging but that can assess a deeper level of understanding as required by new standards. From the perspective of test development and deployment, how are test publishers leveraging technology to move college entrance exams into the 21st Century?
Program Manager Online Assessment, Maryland Department of Education
Dale Cornelius is the chair of IMS Global’s State Assessment Innovation Leadership Network. Previously, as chair of the PARCC Assessment Technology Group, he helped develop the first large-scale assessment designed to be delivered 100% online and accessible to all students...
Smarter Balanced has been an IMS Contributing Member for many years supporting efforts to create open technical interoperability standards that support accessible assessments. In this session we will discuss the use cases driving Smarter Balanced's current work, the evolving assessment landscape, and what role QTI v3 plays in the next iteration of the Smarter Balanced online assessment system.
Many traditional assessment methods will not be sufficient to meet the challenges posed by new standards that are multidimensional or that target skills not easily measured by traditional assessment items. Rather, complex activities, including simulations, will be required to produce evidence of integrated knowledge of a content domain and application of practices. Digital simulations can provide scenarios in which we can better observe students' ability to conduct scientific inquiry alongside content knowledge. This session will describe work underway in Minnesota to explore new opportunities for both formative and summative simulation-based assessment tools through collecting activity stream data. This data provides information about the processes students use in assessment tasks as evidence of student learning, but questions remain about how to capture and aggregate this evidence into interpretable scores. The session will cover the administrative, authoring, and technical challenges encountered in this work.
Vice President, Learning Research and Design, Pearson
Kristen is the Vice-President of Education Research at Pearson, working to integrate learning science research into digital products. Her personal research program centers around game-based assessment, specifically the collaborative design of games as both learning and assessment...
Join fellow state departments/ministries of education, school districts, and higher ed institutions for an open and active discussion on digital assessment. Attendees will be expected to share their successes and challenges and contribute practical ideas that may help inform future assessment work in IMS Global.
This session will provide a technical overview and demos of the new standard on Computer Adaptive Testing (CAT). This specification allows easy integration of adaptive modules (demonstrated by ACT and Cito) into test delivery platforms (demonstrated by OAT and Cito). Additionally, we will provide an update on the status of the specification, available documentation, the conformance testing process, and plans for future versions. In 2016, IMS formed a workgroup of industry leaders to design a standard on CAT: a set of best practices to extend QTI and an API for adaptive engines. The general consensus was not to define one generic sequencing engine or an elaborate standard covering all possible variations of CAT, but to treat an adaptive engine as a "black box" and define a common language to communicate with these engines. This way, end users are assured much-needed interoperability, while vendors can continue to innovate.
Many states and school districts are unaware of QTI and the many benefits that stem from full adoption of the standard. At the state level, the benefits of QTI from an accessibility standpoint are clear: it enables all students to test online. At the district level, however, schools are not required to assess all students using the same platform; this is one key difference between district and state assessment systems. Nevertheless, from an interoperability standpoint, the benefits of QTI adoption at the district level, like the state level, are clear in that it can enable districts to share test items between platforms, increasing quality while decreasing cost. In this open discussion, vendors, states, and districts will brainstorm ideas on how best to engage more districts and states in adopting QTI. What are the roadblocks and where are the opportunities? We will also hear from states and districts that are already engaged and can share best practices.
In this session we will review case studies of the successful delivery of accessible assessments, engage the audience to understand similar efforts underway around the testing world, and discuss what obstacles stand in the way of making further progress and providing assessments that accommodate the needs and preferences of all students.
No more tests? Recent advances in technology and data modeling provide actionable evidence that can help students reach their potential. At the core is redesigning assessment systems to be integrated with learning experiences. Join a diverse panel of experts working in research, higher education, and edtech to discuss how edtech, authentic assessment, and data-driven improvement can shape the future of learning. We'll go over case studies and real student data to show the impacts different assessment styles have on learning and student engagement, both short-term and over an entire college career.