Innovation in assessment is needed as the world of work continues to evolve. Teams increasingly work on a project basis, work and organizations change faster, and candidates are better prepared for assessments than ever.
Many existing assessments were developed at a time when work was more stable and predictable. In the Starcheck Assessment Lab, we therefore develop new assessment concepts in which psychology, data and technology come together. Here, innovations such as Team-composer, which predicts team dynamics, and Reveal, which measures implicit personality without relying on self-reporting, are born.
The Starcheck Assessment Lab is Starcheck’s innovation and testing environment. Here, we continuously develop and test new ways of psychological measurement and translate them into applications that can be used directly in practice. Important innovations are the assessment products Team-composer and Reveal.
The world of work is fundamentally changing. This is due to an accumulation of changes: the rapid rise of AI, the need for agile organizations with flat hierarchies and fewer silos, and an increasingly international labor market. In part, these changes call for different psychological constructs and concepts to be measured.
The predictive power of an assessment has tremendous value: talent decisions can be made with more evidence, the recovery time and costs of wrong talent decisions decrease, and the organization can move forward more quickly.
For assessment to continue adding predictive value, we cannot continue to rely on proven tools and concepts from the past. Innovation in assessment is needed to discover future assessment solutions that fit this new world of work.
Distinctive performance increasingly arises in interaction. Teams, rather than individuals, form the unit in which results are realized. This shifts the focus from “is someone qualified?” to “how does someone function in combination with others and within this context?” When organizations do not understand team dynamics, learning is delayed.
AI tools allow candidates to prepare, optimize, and match their answers to expected outcomes. Particularly with self-descriptive questionnaires, such as personality and drive questionnaires, it becomes more difficult to distinguish whether answers reflect actual behavior or are the product of preparation and strategic responding.
Organizations increasingly operate internationally. As a result, behavior takes on meaning within different cultural frameworks. Differences in self-presentation, communication, and norms influence how people react in assessments and how outcomes should be interpreted. Without context, outcomes can easily be misread.
Within the Assessment Lab, we bring together three activities:
Candidates are increasingly assessed by a score from an assessment tool, even though it is not always clear exactly what that score means.
The use of AI in assessment has increased dramatically in recent years. Many solutions recognize patterns in data and translate them into a prediction. That can be valuable: such predictions can be meaningfully related to later performance.
We see two ways to develop assessment tools: letting an algorithm learn patterns from data and using those patterns directly as the prediction, or building instruments on psychological theory and interpreting the data with explicit, predetermined logic.
At Starcheck, we deliberately take this second approach. We develop instruments and expert systems in which assessment data is interpreted using psychological knowledge and predetermined logic.
This choice for psychometrically constructed instruments matters more the moment you need to explain a decision, improve it, or apply it in other contexts.
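As a purely illustrative sketch of what "predetermined logic" can look like, consider a small rule-based interpretation layer. The rules, trait names, and thresholds below are invented for this example and are not Starcheck's actual scoring logic; the point is that every conclusion can be traced back to an inspectable rule.

```python
# Hypothetical expert-system sketch: assessment scores are interpreted
# by explicit, inspectable rules rather than by an opaque model.
# All rules and thresholds are invented for illustration.

def interpret_scores(scores: dict) -> list[str]:
    """Map standardized scale scores (z-scores) to readable findings
    using predetermined rules."""
    findings = []
    if scores.get("conscientiousness", 0.0) >= 1.0:
        findings.append("Strongly structured and reliable work style.")
    if scores.get("cognitive_ability", 0.0) >= 1.0 and scores.get("openness", 0.0) >= 0.5:
        findings.append("Likely to pick up complex, novel tasks quickly.")
    if scores.get("emotional_stability", 0.0) <= -1.0:
        findings.append("May need support under sustained pressure.")
    return findings

print(interpret_scores({"conscientiousness": 1.2,
                        "cognitive_ability": 1.1,
                        "openness": 0.6}))
```

Because each finding maps to a named rule, the logic can be explained, audited, and improved — exactly the property that opaque pattern-recognition models lack.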
With the advent of the EU AI Act, this importance increases further. Under the Act, AI systems used in recruitment and selection are classified as high-risk applications. Organizations must be able to demonstrate how decisions are made and what factors are involved.
What you cannot substantiate thus becomes increasingly difficult to justify. In the end, it is not just about forecasting; it is about understanding what you are measuring and being able to substantiate it.
Two innovations illustrate what this looks like in practice: Team-composer and Reveal.
Team-composer is an assessment solution that provides predictive insight into team dynamics and collective intelligence.
Reveal is an implicit personality assessment that measures behavioral tendencies through reaction time-based tests.
Together they show how new measurement methods can make both team dynamics and personality more visible.
Team-composer is an assessment platform designed to provide insight into teams as systems. It focuses on collective intelligence and examines how combinations of individuals are likely to function together in relation to a defined task or assignment.
Rather than evaluating or ranking individuals, Team-composer provides structured insight into interaction patterns, complementarity, and potential vulnerabilities at the team level. The platform emphasizes transparency, privacy, and psychological safety, supporting constructive dialogue and informed decision-making.
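As a hypothetical sketch of what "insight at the team level" could mean, consider checking whether a set of required behavioral orientations is covered by at least one team member. The trait names, threshold, and scoring below are invented for illustration and are not Team-composer's actual model:

```python
# Hypothetical team-level "coverage" view: for each orientation a task
# requires, is there at least one member who scores clearly on it?
# Trait names and the threshold are invented for this sketch.

def role_coverage(team: list[dict], required: list[str], threshold: float = 0.5) -> dict:
    """Return, per required orientation, whether at least one team
    member's z-score reaches the threshold."""
    return {
        trait: any(member.get(trait, 0.0) >= threshold for member in team)
        for trait in required
    }

team = [
    {"structuring": 1.1, "idea_generation": -0.4},
    {"idea_generation": 0.9, "relationship_focus": 0.2},
]
print(role_coverage(team, ["structuring", "idea_generation", "relationship_focus"]))
# An uncovered orientation (here: relationship_focus) would surface
# as a potential vulnerability at the team level, not as a verdict
# about any individual.
```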
Reveal is an implicit personality assessment. It measures non-conscious behavioral tendencies using reaction-time data, grounded in validated psychological assessment principles. Unlike traditional personality questionnaires, this method does not rely on conscious self-assessment. The results are less sensitive to culturally shaped response styles, such as modesty bias, harmony-oriented behavior, socially desirable answering, or effects of impression management.
Two variants are available:
Results are presented as behavioral orientations rather than fixed traits, which supports reflective and developmental use.
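Reveal's actual scoring method is not described here, but the general logic of reaction-time-based implicit measures can be sketched with the well-known D-score from implicit association research: systematically faster responses in one pairing condition than in another suggest a stronger implicit association. The latencies below are made up for illustration.

```python
# Illustrative only: Reveal's scoring is proprietary. This sketch
# follows the general D-score logic of reaction-time measures such as
# the IAT: the mean latency difference between two conditions, scaled
# by the pooled standard deviation of all trials.
from statistics import mean, stdev

def d_score(latencies_a: list[float], latencies_b: list[float]) -> float:
    """Mean latency difference (condition B minus condition A, in ms),
    divided by the standard deviation of all trials pooled."""
    pooled_sd = stdev(latencies_a + latencies_b)
    return (mean(latencies_b) - mean(latencies_a)) / pooled_sd

# Condition A answered faster on average -> positive score, i.e. a
# stronger tendency toward the concept pairing tested in condition A.
fast_a = [620.0, 650.0, 600.0, 640.0]
slow_b = [780.0, 820.0, 800.0, 760.0]
print(round(d_score(fast_a, slow_b), 2))  # → 1.81
```

Because the signal comes from response speed rather than from what candidates say about themselves, it is harder to steer deliberately — the property the text above attributes to implicit measurement.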
Discover Team-composer, which allows you to estimate the likelihood of successful collaboration before the team has even begun its task.
Learn more about the power of the collective: an e-book on building teams and organizing collective intelligence.
Join the discussion on Minds-United’s LinkedIn group or listen to our podcast on team building and case studies from various guest speakers.
This fact sheet provides an overview of the most commonly used (psychological) selection methods, both classical and modern. The figures are based on meta-analyses and dominant scientific literature.
| Method | Predictive validity (r) | Typical reliability |
|---|---|---|
| Cognitive ability (GMA test) | .51 | High (.85-.95) |
| Work sample test | .54 | High (inter-rater ≥.70) |
| Structured interview | .51 | Medium-high (.60-.75) |
| Unstructured interview | .18-.38 | Low-medium (.40-.55) |
| Integrity test | .41 | High (α ≥.80) |
| Conscientiousness (Big Five) | .31 | Medium-high (α ~.75-.85) |
| Job knowledge test | .48 | High (≥.80) |
| Years of work experience | .18 | Not applicable |
| Video/asynchronous interview (incl. AI) | .30-.40 | High when structured; variable for algorithmic scoring |
| Machine learning / algorithmic models | .20-.50 | Depends on dataset; generalizability limited |
| Serious games / game-based work samples | .35-.50 | High on objective metrics |
| Social media screening | .00-.20 | Low and variable |
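The validity coefficients above can be translated into a rough expected performance gain per hire using the classic Brogden-Cronbach-Gleser utility logic: gain ≈ validity (r) × the monetary standard deviation of job performance × the mean standardized test score of those selected. The euro figure and selection ratio below are illustrative assumptions, not Starcheck data.

```python
# Back-of-the-envelope utility estimate (Brogden-Cronbach-Gleser):
# expected gain per hire = r * SD_y * mean z-score of selectees.
# SD_y (EUR 40,000/year) and the selection ratio are assumptions
# chosen for illustration only.

def expected_gain_per_hire(validity: float, sd_performance: float, mean_z_selected: float) -> float:
    return validity * sd_performance * mean_z_selected

# Selecting roughly the top 30% of applicants gives a mean selectee
# z-score of about 1.16 under a normal distribution.
for method, r in [("GMA test", 0.51), ("Unstructured interview", 0.25)]:
    gain = expected_gain_per_hire(r, 40_000, 1.16)
    print(f"{method}: about EUR {gain:,.0f} per hire per year")
```

Under these assumptions, the gap between a high-validity method (r = .51) and a low-validity one (r ≈ .25) is worth on the order of EUR 12,000 per hire per year — which is why the table's differences in r matter in practice.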
Call directly:
+31 88 277 377 6