This cloud-based model is trained to help you optimise the sound of your audio product. It characterises the sound and guides you on how to improve the tuning.
Perceptual characterisation of your audio products 24/7
The Virtual Listener Panel™ (VLP) is a cloud-based application you can access when you, as an audio product developer, need perceptual characterisation of your product. The model is trained on large sets of high-quality data collected with SenseLab's expert listeners and delivers highly reliable perceptual evaluations with great ease and speed.
Effortless perceptual evaluation for audio product development
Perceptual evaluation should be a natural part of product development. With VLP, you simply upload recordings or simulations of your products, analyse them, and explore the perceptual differences as input to your iterative product optimisation process.
Contact SenseLab
Do you have a question, or want to discuss a project?
The VLP service is offered at three subscription levels, from the Light version at 495 EUR per year (including 100 credits/month) up to Enterprise at 3,495 EUR per year (including 1,000 credits/month).
The subscription covers access to the VLP platform, including the current version of the model; a Premium subscription always gives you access to the newest model.
Every new analysis in VLP costs a certain number of credits to run. You are told how many credits an analysis requires before you accept or decline it, and your personal account shows how many credits you have left. You can purchase more credits at any time.
The Virtual Listener Panel (VLP) is a machine learning-powered tool developed by SenseLab to evaluate and quantify the perceptual quality of audio products. It enables manufacturers to compare sound characteristics across different systems in minutes, as opposed to days or weeks when using traditional human expert panels.
VLP can analyse perceptual differences based on attributes in the timbre domain. The model is trained to predict perceptual differences in music or noise playback.
The VLP uses a Siamese-inspired neural network model based on no-reference ratings, ensuring precise evaluations without the need for direct comparison to a reference file.
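As a rough illustration of what such an architecture can look like, here is a minimal PyTorch sketch of a Siamese-style pairwise model: one shared encoder produces a no-reference rating for each system, and the pairwise output is simply the difference of the two ratings. All layer sizes, feature dimensions, and names below are assumptions; the actual VLP architecture is not public.

```python
# Minimal sketch of a Siamese-style pairwise comparison model.
# All layer sizes, feature dimensions, and names are illustrative
# assumptions -- the actual VLP architecture is not published.
import torch
import torch.nn as nn

class AttributeEncoder(nn.Module):
    """Maps an audio feature vector to a scalar attribute score."""
    def __init__(self, n_features: int = 128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(n_features, 64),
            nn.ReLU(),
            nn.Linear(64, 1),  # scalar "no-reference" attribute rating
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.net(x)

class SiamesePair(nn.Module):
    """Shared encoder applied to both systems; output is the rating difference."""
    def __init__(self, encoder: AttributeEncoder):
        super().__init__()
        self.encoder = encoder  # weight sharing is what makes it "Siamese"

    def forward(self, a: torch.Tensor, b: torch.Tensor) -> torch.Tensor:
        return self.encoder(a) - self.encoder(b)

encoder = AttributeEncoder()
model = SiamesePair(encoder)
features_a = torch.randn(8, 128)  # batch of feature vectors for system A
features_b = torch.randn(8, 128)  # ... and for system B
diff = model(features_a, features_b)  # predicted perceptual difference per pair
```

Because both branches share weights, each system also gets a standalone rating from the encoder, which is what makes no-reference evaluation possible without a reference file.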
The VLP is trained on large data sets of paired comparisons collected with SenseLab's trained expert listener panel. Variations of sound samples were created specifically for each attribute and convolved with the measured responses of a range of real headphones and earbuds.
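The convolution step can be illustrated as follows, assuming each headphone measurement is available as an impulse response. The file names, mono assumption, and scipy-based approach are illustrative, not a description of SenseLab's actual pipeline.

```python
# Sketch of auralising a sound sample through a measured headphone,
# assuming the measurement is stored as an impulse response (IR) and
# both files are mono. File names are hypothetical.
import numpy as np
from scipy.io import wavfile
from scipy.signal import fftconvolve

fs_sample, sample = wavfile.read("music_excerpt.wav")    # hypothetical file
fs_ir, ir = wavfile.read("headphone_left_ir.wav")        # hypothetical file
assert fs_sample == fs_ir, "resample first if the rates differ"

sample = sample.astype(np.float64)
ir = ir.astype(np.float64)

# Convolve the dry sample with the headphone IR to simulate playback.
simulated = fftconvolve(sample, ir, mode="full")

# Normalise to avoid clipping before writing back to 16-bit PCM.
simulated /= np.max(np.abs(simulated))
wavfile.write("simulated_playback.wav", fs_sample, (simulated * 32767).astype(np.int16))
```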
The VLP allows manufacturers to make perceptually informed decisions faster and more efficiently. By automating perceptual evaluation, it reduces the time and costs typically involved in testing with human expert panels.
The VLP uses a large dataset collected from SenseLab's expert panel, comprising around 54,000 ratings of music samples from various genres and of pink noise. This data is transformed into 150,000 pairwise ratings for model training and validation.
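The expansion from absolute ratings to pairwise ratings can be sketched as below; the grouping key and column names are assumptions, as the exact pairing scheme used for VLP is not published.

```python
# Sketch: expanding absolute attribute ratings into pairwise training
# examples. Column names and the grouping logic are assumptions.
from itertools import combinations
import pandas as pd

ratings = pd.DataFrame({
    "sample": ["music_a", "music_a", "music_a", "noise_b", "noise_b"],
    "system": ["hp1", "hp2", "hp3", "hp1", "hp2"],
    "rating": [6.2, 4.8, 5.5, 7.1, 6.4],
})

pairs = []
# Only compare systems rated on the same underlying sample.
for sample, group in ratings.groupby("sample"):
    for (_, a), (_, b) in combinations(group.iterrows(), 2):
        pairs.append({
            "sample": sample,
            "system_a": a["system"],
            "system_b": b["system"],
            "target_diff": a["rating"] - b["rating"],  # pairwise label
        })

pairwise = pd.DataFrame(pairs)
print(pairwise)  # each row is one pairwise training example
```

With n systems rated on the same sample, this yields n(n-1)/2 pairs per sample, which is how tens of thousands of absolute ratings can grow into a considerably larger pairwise set.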
The VLP model is trained using a Siamese-inspired neural network architecture. It has been validated to ensure its ratings align closely with human experts' assessments of real systems.
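The kind of validation described here can be sketched as a simple correlation check between model predictions and expert panel means on held-out systems; the numbers below are made up for illustration.

```python
# Sketch of validating model predictions against human expert ratings.
# All data values are hypothetical; the actual validation protocol and
# acceptance criterion used for VLP are not published.
from scipy.stats import pearsonr

human_ratings = [6.1, 4.9, 5.6, 7.0, 6.3]   # expert panel means (hypothetical)
model_ratings = [5.9, 5.1, 5.4, 6.8, 6.5]   # model predictions (hypothetical)

r, p = pearsonr(human_ratings, model_ratings)
print(f"Pearson r = {r:.2f} (p = {p:.3f})")  # high r => predictions track the experts
```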
Yes, the VLP is designed to be a scalable and easily integrable framework, ensuring that perceptually informed decisions are made from the very beginning of product development.
The VLP leverages a machine-learning approach based on no-reference ratings. It also employs extensive audio features, such as spectrum type and loudness normalisation, to enhance its testing precision.
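Loudness normalisation, one of the steps mentioned above, is commonly done against an integrated-loudness target per ITU-R BS.1770. The sketch below uses the pyloudnorm package; whether VLP uses this exact method, target level, or package is an assumption, and the file names are illustrative.

```python
# Sketch of loudness-normalising two stimuli before comparison, using
# ITU-R BS.1770 integrated loudness via pyloudnorm. Target level and
# file names are illustrative assumptions.
import soundfile as sf
import pyloudnorm as pyln

TARGET_LUFS = -23.0  # assumed target level

def normalise(path: str, target: float = TARGET_LUFS):
    audio, rate = sf.read(path)
    meter = pyln.Meter(rate)                     # BS.1770 loudness meter
    loudness = meter.integrated_loudness(audio)  # measured loudness in LUFS
    return pyln.normalize.loudness(audio, loudness, target), rate

a, fs = normalise("system_a.wav")
b, _ = normalise("system_b.wav")
# Both clips now sit at the same integrated loudness, so perceptual
# differences are not dominated by simple level differences.
```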