Since December 2020, the new Test Tool has been gradually released to users. This page contains release notes for 2023. (For more about the new test tool, see the overview page: New Test Tool Overview)
TABLE OF CONTENTS
- Test v1.30 (September)
- Test v1.29 (August)
- Test v1.28 (June)
- Test v1.27 (May)
- Test v1.26 (April)
- Test v1.25.1 (March)
- Test v1.25 (March)
- Test v1.24 (January)
You can find previous release notes here:
Test v1.30 (September)
Technical release with a few minor bugfixes.
Test v1.29 (August)
This release is mostly technical, with updates to the editor to ensure it is consistent with the rest of itslearning. The editor changes also fix a recent issue with the introduction text in question groups.
QTI import update
The QTI import is extended to handle images and equations in multiple choice answer alternatives. The change has been made mainly to ensure that content can be migrated from the old test tool with minimal loss.
- Support for .jfif files in Select point
- Accessibility fixes: markup errors, colour contrast, label and button texts
- Review now works in the library again
- Duplicate names of question groups are no longer allowed
- The number of questions is now correct also when "Random select" is used in question groups to draw only a subset of the questions
Test v1.28 (June)
Categories are replaced with 'Question groups'
We are proud to finally present a more modern version of the test tool's 'Categories' feature. The new version has been in pilot for a few months, and after some tweaks and bugfixes we are now ready to roll this out to everyone.
This is a rather advanced feature in our test tool, and can be used in several ways. We have modernised the entire feature and removed some previous hiccups and issues. Along with the new look and feel, we renamed categories to 'Question groups' to make the feature easier to understand. In addition, there are several feature enhancements. The change from categories to question groups will retain all existing data and functionality.
When you create a new test and go to the 'Question groups' tab, you will find:
- New overview page
- New page for adding/editing a group
- Improved language to make the feature easier to understand
- Improved logic for random selection of questions
- Option to show one group on a separate page
- Option to have an introduction text displayed with each group, to set a context for the questions
One thing worth noticing is that if you had previously set "Questions to draw" = 0, you will now have "Randomly select questions" = disabled, and all questions in the group are used. This is how it actually worked previously, although it was poorly reflected in the interface. It is no longer possible to set this to 0.
Another change in logic is that we previously locked question order to random when you had set up the test to randomly select questions from at least one category. That restriction is now lifted, and the teacher can freely select normal order (questions are presented in the order from the question list) or random order (mixing all questions for each attempt).
Please refer to the full documentation for details: Test question groups (formerly categories)
Advanced question analysis
The rollout of this feature got delayed due to some issues we found just before the previous release. These are now fixed and the feature (as described in release 1.27) will be rolled out as part of this release instead.
- Accessibility improvements: adding labels and making sure colour is not the only way to convey meaning in the attempts list
- Tests can be completed using keyboard in Chrome
- Wide images will be resized on smaller screens to avoid them being cut off
- Reviewing a test as a teacher using Testmode Browser now works as expected again
Test v1.27 (May)
Advanced question analysis
NOTE: This will not be enabled for all until 1-2 weeks AFTER the release.
For high stakes tests or exams, in particular in higher education, teachers want to analyse the quality of their questions. There are established methods for this, looking at the difficulty and discrimination of each question based on the student answers. This feature has been in pilot for a couple of months, and we are happy to finally be able to offer this to everyone. The option is found in the Results tab, close to the summary information for the test. The analysis is performed on all counting attempts that are fully assessed (max one per student).
Clicking the button will start the calculations, and create an Excel file (.xlsx). You will see an information banner telling you that the analysis is ongoing, and when it is done, there is a download link. The link is available for 24 hours.
If students later submit new attempts that will be counting, or you assess more attempts, simply click the button again to fetch an updated analysis.
Detailed documentation can be found in our support notes: Advanced question analysis in tests
Test mode browser update
Updated version of TMB in the download link for Windows (2.1.0.05) and Mac OS (22.214.171.124). In addition to general security improvements, this update fixes an issue with Mac computers with the M1 chip. After the release, older versions are still accepted, but students are recommended to download and install the new version to take a test (the download button is found on the test start page). There is also a new download file for the Windows LAB version (available since March 2023); administrators can reach out to support if they have not already been contacted about this. Chromebooks and iPads are not affected.
The old test tool will get an updated link in R140. Note that you do not have to download TMB separately for the two test tools, the same Test mode browser is used for both and the link points to the same download.
Note: this update does NOT increase the minimum version, so updating is voluntary for now. We will increase the minimum version later this summer.
- Assessment scales can be changed even after someone has started the test
- Language selector added to the full editor (similar to the rest of itslearning)
- An unfortunate bug led to errors in the timestamp for submission (it was set to the time of review). This is now fixed.
- Recalculation of scores when customising the score in multiple choice didn't work
- Some updates to behaviour of anonymous tests
Test v1.26 (April)
This release does not contain any new features, but several bugfixes in addition to performance and technical improvements.
For years, we have had a technical issue resulting in some dialogs appearing out of view in certain scenarios. After several attempts to fix this in an elegant way, we have now reverted to the native browser dialog. This usually appears at the top of the window and will often have a quite technical-sounding heading. The browser dialog is now used in the following scenarios:
- Confirmation when deleting a question from the list
- Confirmation when deleting an attempt from the list of submissions
- Confirmation when submitting an attempt
- Negative score is no longer available when manual assessment is used in a question
- Several accessibility improvements, including making the start button in Chrome work with keyboard
- When using random selection of questions (draw in categories) we used to lock the question order. This can now be freely changed. We also fixed an issue where the question navigation got locked as well.
Note: We've had another two minor patches in March (v1.25.2 and 1.25.3), with minor bugfixes and performance improvements.
Test v1.25.1 (March)
We found some major bugs after the previous release, so this is a minor release just with bugfixes.
- Double-clicking the submit button gave an error
- The negative score setting in multiple choice/response was disabled after saving the question
- The total maximum score for a test was wrong if questions with max score >1 were added to a category (or question group)
Test v1.25 (March)
Over the past two months we have had a series of small patches to fix some accessibility issues and other bugs.
In this release we launched a pilot with the redesign of Categories, now called Question groups. This is currently only available to a small number of pilot customers, and we will collect feedback and follow-up on any issues before rolling out to all customers later this year, aiming for June.
For those interested, the redesigned feature is documented here: Test question groups (formerly categories)
- Expand/collapse all is available in the new assessment page
- If you have set categories to draw a random set of questions, you are now allowed to change the question order. Random means all questions are mixed. Normal means the order from the question list is used.
- Audio/video in Open answer responses is now displayed correctly when assessing tests
- Several accessibility improvements
- "Reviewed" time will now reflect time of submission for tests that are automatically assessed
Test v1.24 (January)
We are now introducing an option for negative score which will apply to questions where the students can give more than one answer:
- Multiple response
- Fill in the blank
- Select from a list
- Select point
The option is found near the score setting, where the teacher can activate negative score for the question. Unlike the old test tool, this option is now set per question, not for the test as a whole, giving teachers more flexibility in when to use it. When the option is selected, any wrong answers that are checked will result in a deduction in the score for that question. Note that the total score for a question will never be below 0.
The option can be changed after someone has started the test (this triggers a recalculation for any already submitted attempts).
Please be aware that you might see an unusual pattern in the distribution of the score. As an example: You have created a Fill in the blank question with 5 blank fields. The question score is 5, meaning each blank field is worth 1 point. Without negative score enabled, the student will not be punished for any wrong answers, but simply receive 0 points for them. With negative score, the student will be deducted 1 point for each wrong or blank answer. In this case, students can never achieve 2 or 4 points.
| Number of correct answers | Number of wrong or blank answers | Score |
| --- | --- | --- |
| 5 | 0 | 5 |
| 4 | 1 | 3 |
| 3 | 2 | 1 |
| 2, 1 or 0 | 3, 4 or 5 | More wrong than correct => 0 |
Note: After discussions we will change the logic so blank answers are not punished. This applies to Fill in the blank, Select from a list and Select point question types. The change will be included in the next release. Sorry for the inconvenience.
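The scoring rule described above can be sketched as follows. This is an illustrative example of the stated logic, not itslearning's code; the function name and parameters are hypothetical, and the `punish_blank` flag models the planned change where blank answers are no longer penalised.

```python
# Sketch of the negative-score rule described above, assuming one point
# per field: each wrong (and, under the original logic, blank) answer
# deducts one point, and the question score never drops below zero.
# Hypothetical illustration - not itslearning's implementation.
def question_score(n_correct, n_wrong, n_blank, punish_blank=True):
    deduction = n_wrong + (n_blank if punish_blank else 0)
    return max(0, n_correct - deduction)
```

With five blank fields worth one point each, the achievable scores under this rule are 5, 3, 1 and 0, matching the example above: 2 and 4 points can never occur.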
Automatic feedback on answer alternatives
Teachers can now add automatic feedback for each individual answer alternative in multiple choice/response questions. This means that the feedback will be displayed to the student, together with the results, for each alternative they have selected. As long as the students are allowed to see questions and their own answers, they will also be able to see the feedback. This does NOT rely on correct answers being displayed.
Automatic feedback on answer alternatives is not included in copy or import yet. It is also not included in the QTI export from the old test tool. We aim to look more into this later.