Alternative Forced Choice (AFC)
The Alternative Forced Choice feature includes adaptive versions of both the paired comparison test (2-AFC: “Which is best?” “Which is more?”) and the triangle test (3-AFC: “Which is different?”). The method is sometimes referred to as the adaptive staircase: the stimuli presented in the next trial depend on the participant's choice in the current trial. Compared to the static paired comparison and triangle alternatives, the adaptive versions require fewer trials and stop once a threshold or plateau has been reached, thus taking less time per participant and requiring them to focus for shorter intervals. Our implementation is simple in the sense that all stimuli must be uploaded beforehand, as opposed to being generated or mixed between trials. However, it has an advanced configuration with the possibility of using multiple stages, and it includes the hybrid weighted up/down convergence method.
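To illustrate the idea behind an adaptive staircase, here is a minimal sketch of a weighted up/down track in Python. It is not the SenseLabOnline implementation; the function name, the unit step sizes, and the simple rounding to the nearest uploaded stimulus level are all assumptions for illustration. The key property is that the up and down step sizes are weighted so the track converges on the level answered correctly at the target rate:

```python
def run_staircase(respond, levels, start_index, target=0.75, n_trials=40):
    """Weighted up/down adaptive staircase (illustrative sketch).

    respond(level) -> True if the forced-choice answer was correct.
    Step sizes are weighted so that, at convergence,
    target = step_up / (step_up + step_down), i.e. the track settles
    at the level answered correctly `target` of the time.
    Returns the sequence of visited level indices.
    """
    step_down = 1.0
    step_up = step_down * target / (1.0 - target)  # 3.0 for target=0.75
    pos = float(start_index)
    track = [start_index]
    for _ in range(n_trials):
        idx = max(0, min(round(pos), len(levels) - 1))
        if respond(levels[idx]):
            pos = max(0.0, pos - step_down)              # harder after a correct answer
        else:
            pos = min(len(levels) - 1.0, pos + step_up)  # easier after an error
        track.append(max(0, min(round(pos), len(levels) - 1)))
    return track
```

With a deterministic observer who is correct above some level, the track descends from the starting level and then oscillates tightly around that level, which is the behaviour that lets the test stop early once a plateau is detected.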
Simple video in the browser
This feature enables tests to include both audio and video stimuli. It was designed for testing audio in the context of video or images. Due to the practicalities of downloading video within online tests, only compressed formats (e.g. mp4) are supported. This feature supports video presentation in split-screen mode shared with the user interface (suitable for online testing) or in a two-screen setup (lab setting) with one screen dedicated to the video content. Crossfading and zoom functionality are limited. The feature supports uploading of video files with embedded audio, or of audio and video files separately.
Design of Experiment (DoE)
Design of Experiments (DoE) is a feature that allows full control of the presentation order within a test. Currently, most templates use an automated full-factorial block design with randomized presentation order. With a custom DoE you can, for example, make balanced designs or fractional-factorial designs, or include hidden reference stimuli to gauge participant performance. In recent years, DoE has also been widely used to give participants slightly different stimuli when collecting data for machine learning. Furthermore, our DoE feature allows configuration of tests with fewer or more factors than the three in the templates (System, Sample, Condition).
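As a point of reference for the default behaviour, a full-factorial block design with randomized presentation order can be sketched in a few lines of Python. The function name and the per-participant seed are assumptions for illustration, not part of the product:

```python
import itertools
import random

def full_factorial_design(systems, samples, conditions, seed=None):
    """Full-factorial block design with randomized presentation order
    (illustrative sketch): every (system, sample, condition) combination
    appears exactly once, and the order is shuffled, e.g. with a
    per-participant seed so each participant gets a different order.
    """
    trials = list(itertools.product(systems, samples, conditions))
    random.Random(seed).shuffle(trials)
    return trials
```

A custom DoE replaces exactly this step: instead of one shuffled block of every combination, the trial list can be a balanced or fractional subset, can include hidden references, and can use more or fewer factors.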
Media per assessor
This feature is designed for tests where the stimuli are individualised, e.g. for hearing aid users with individual fittings. It enables a test to contain a full stimulus set of every factor combination designated to each individual participant. Having one test instead of multiple tests (one per individual, which is the alternative to this feature) increases efficiency, ensures an identical test configuration, and allows a collective statistical analysis. Furthermore, data collection can begin as soon as the stimuli for one participant are uploaded (instead of requiring all stimuli to be recorded and uploaded first), and the test can be previewed with the stimuli for any of the invited participants.
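Conceptually, the feature adds the participant as an extra key on the stimulus set. The sketch below, with a hypothetical data layout (a mapping from participant and factor combination to a stimulus file), shows how one can tell which participants already have their full individual set uploaded and can therefore start the test:

```python
import itertools

def ready_participants(media, participants, systems, samples, conditions):
    """Return the participants whose full individual stimulus set is
    already uploaded (illustrative sketch, hypothetical data layout).

    `media` maps (participant, system, sample, condition) -> stimulus file.
    Data collection can start for the participants listed here while
    uploads for the remaining participants are still pending.
    """
    combos = list(itertools.product(systems, samples, conditions))
    return [p for p in participants
            if all((p, *c) in media for c in combos)]
```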
Performance adaptation
This feature enables a test to be configured with a pre-determined correct ranking of systems (products, DUTs). It is useful, for example, for screening participants (consumers) or training assessors. For screening purposes, a screening test placed in a test sequence can stop participants from starting a main test using a gating criterion, e.g. a correct ranking of the stimuli in two trials. For training purposes, assessors can receive feedback per trial and/or per test, automatically repeat failed trials, or skip the remainder of a test 1) if a sufficient number of trials were passed, or 2) if too many trials were failed.
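The gating criterion mentioned above amounts to counting the trials in which the participant reproduced the known correct ranking. A minimal sketch, where the function name and the two-trial threshold are illustrative:

```python
def passes_gate(trial_rankings, correct_ranking, required_correct=2):
    """Gating criterion for a screening test (illustrative sketch):
    unlock the main test only if the participant reproduced the
    pre-determined correct ranking in at least `required_correct` trials.
    """
    n_correct = sum(1 for r in trial_rankings
                    if list(r) == list(correct_ranking))
    return n_correct >= required_correct
```

The training-mode behaviours (feedback per trial, repeating failed trials, early skip) are decisions layered on the same per-trial pass/fail signal.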
Scripted tests
The Scripted tests feature adds a special test type that enables test configuration using Python scripting. This is a super-user feature that requires significant training and effort to master but enables very advanced test designs: for example, tests with multiple test interfaces in one test, test logic that determines the stimuli or test interface in the next trial (e.g. if annoying, then background noise intrusiveness), variable test completion criteria (e.g. based on test duration), variable test instructions, etc. Purchasing the feature includes training of a limited number of administrators. As a separate consultancy service, SenseLab offers to write custom scripts as a shortcut to making your advanced test possible.
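To give a flavour of the kind of per-trial logic scripting enables, here is a small sketch of the "if annoying, then background noise intrusiveness" example. This is hypothetical plain Python, not the actual SenseLabOnline scripting API; all names, the dictionary structure, and the rating threshold are assumptions for illustration:

```python
def next_trial(last_response):
    """Per-trial branching logic (hypothetical sketch, not the real API):
    if the last stimulus was rated sufficiently annoying, follow up with
    a background noise intrusiveness trial on the same stimulus;
    otherwise continue with the next annoyance trial.
    """
    if (last_response["attribute"] == "annoyance"
            and last_response["rating"] >= 7):  # threshold is illustrative
        return {"interface": "background_noise_intrusiveness",
                "stimulus": last_response["stimulus"]}
    return {"interface": "annoyance", "stimulus": "next_in_sequence"}
```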
Single-Sign-On (SSO)
Single-Sign-On (SSO) is a feature that enables clients to log in using their corporate credentials (e.g. their Windows or Mac login) instead of a separate password. In practice, you log in from a corporate website and are redirected to SenseLabOnline. It requires both that you are a registered admin in our license system and that your IT department has granted your login access to our software. Permission from your IT department can usually be given per person or per department. SSO exists in many variants; we support OpenID Connect and SAML2. Configuring SSO requires some collaboration with the client's IT department, for which we charge a one-time fee.
A further advantage of SSO is that it gives your IT department access control: admins leaving the department or company have their access removed without anyone needing to (remember to) contact us at SenseLab. One disadvantage of SSO is that it might trigger a requirement for a TPA, for which we charge a handling fee. See more below.
Third Party Assessment (TPA)
Third Party Assessment (TPA) is a document that a client company may require us to fill out. It usually contains approximately 200 questions about our IT security and is normally requested annually. Client companies do not always require TPAs, but the requirement can be triggered by what is seen as a (risky) software integration; for our software, this could be the SSO feature. If we do not fill out a TPA when requested, the client company might not allow internal use of our software. Filling it out requires input from the development team and both our IT and IT security departments and is a large task for us to handle, partly because some questions relate to details we cannot disclose (e.g. where is your backup physically stored?), and partly because our IT security is continuously updated, so we cannot assume that anything is unchanged from the year before. Consequently, we charge a handling fee to fill out a TPA and are not required to do so unless the fee has been paid.
Server upgrade: Regional server
Our main server is hosted internally at our headquarters in Denmark. In some cases, the response time from our server location to the location of participants (or administrators) might be noticeable. A regional server is usually not needed, as SenseLabOnline pre-downloads the stimuli for the next trial while the evaluation in the current trial takes place, but in some cases it can help: for example, tests with very fast evaluation times (e.g. speech-in-noise tests), tests with larger media files (e.g. video), or adaptive tests where the stimuli for the next trials are not pre-determined. For those use cases, we offer the use of a regional server. For this service, we use Microsoft Azure server centres located across the globe. A list of their locations is available here. You are free to choose a region. Other reasons for needing a regional server might include legal consequences of where the data is stored, or challenges with regional internet traffic surveillance that affect response times.
Server upgrade: Self-hosted
In rare cases, you might want to host our SenseLabOnline server internally within your company, the main reasons being corporate confidentiality policies and IT security concerns. This service is not available in all regions; please contact us for details. While having SenseLabOnline hosted internally may satisfy security concerns, it does have downsides, mainly that updates (of software or license) require a larger effort and might happen more slowly than on our main and regional servers. Please note that our software and server configuration are periodically challenged in penetration tests and will soon be governed by the European NIS2 regulations.