Test Specifications
Define the key parameters of the assessment you are planning to develop. These specifications will appear in your final report.
Customize Category Weights
Assign a percentage weight to each category based on your organization's priorities. Weights must total 100%.
Psychometric Planning
Theoretical foundation, measurement design, and scientific rigour
40%
Equity & Legal Compliance
Fairness by design, bias prevention, and legal framework alignment
35%
Strategic & Operational Viability
Alignment with business goals, resource feasibility, and stakeholder buy-in
25%
✓ Total: 100% — Weights are valid
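For illustration only, here is a minimal sketch (in Python, not part of this tool) of how the weighted scoring might work, assuming each question is answered on a 1 to 5 scale and each category score is the mean of its ten questions. The category names match the tool; the item scores, function names, and the 1 to 5 mapping are assumptions made for the example.

```python
# Minimal sketch of weighted category scoring (illustrative; not the tool's implementation).

def category_score(item_scores):
    """Mean of the 1-5 item scores for one category."""
    return sum(item_scores) / len(item_scores)

def overall_readiness(category_scores, weights):
    """Weighted average of category scores; weights must total 100% (i.e. 1.0)."""
    assert abs(sum(weights.values()) - 1.0) < 1e-9, "Weights must total 100%"
    return sum(category_scores[name] * weight for name, weight in weights.items())

weights = {
    "Psychometric Planning": 0.40,
    "Equity & Legal Compliance": 0.35,
    "Strategic & Operational Viability": 0.25,
}

# Hypothetical answers to the ten questions in each category, each rated 1-5.
scores = {
    "Psychometric Planning": category_score([4, 3, 5, 4, 3, 4, 2, 3, 4, 3]),
    "Equity & Legal Compliance": category_score([3, 3, 4, 2, 3, 4, 3, 2, 4, 3]),
    "Strategic & Operational Viability": category_score([4, 4, 3, 4, 5, 3, 3, 2, 4, 4]),
}

print(f"Overall readiness score: {overall_readiness(scores, weights):.2f} out of 5")
```

Banding an overall score of this kind is how a planning effort might land on the Early Concept, In Development, or Build Ready levels described in the Design Readiness Spectrum further below.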
Psychometric Planning (Weight: 40%)
Question 1
Have the target construct(s) been explicitly defined, and is there a clear rationale for why the chosen assessment format will accurately measure them for this role and level (e.g. using a situational judgement test to measure decision-making in a supervisory role)?
Question 2
Is there a plan to conduct a job analysis or competency review to ensure the content of the assessment is directly linked to the requirements of the role?
Question 3
How well defined is your plan for establishing reliability — that is, ensuring the assessment produces consistent results across candidates, raters, and administrations? (1 = No plan, 5 = Fully defined)
Question 4
Is there a plan to standardize administration conditions, instructions, and scoring criteria so that all candidates are assessed under the same conditions?
Question 5
How well developed is your approach to scoring — including clear rating scales, behavioural anchors, or marking keys that will allow consistent interpretation of responses? (1 = Undeveloped, 5 = Fully developed)
Question 6
Will qualified psychologists or assessment specialists be involved in the design, review, or validation of the assessment?
Question 7
How clearly have you defined how results will be interpreted and communicated — including what a high or low score means and how it will inform decisions? (1 = Not defined, 5 = Clearly defined)
Question 8
Is there a plan to pilot the assessment with a representative sample before full roll-out — and separately, a plan to periodically review and revalidate it when the role, context, or candidate pool changes significantly?
Question 9
Is there a plan to gather validity evidence — such as content, criterion-related, or construct validity data — to support the intended interpretation and use of scores from this assessment? (1 = No plan, 5 = Fully planned)
Question 10
Is there a plan to establish cut scores or benchmarks for decision-making, and is the method for doing so (e.g. criterion-referenced, norm-referenced) documented and defensible?
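To make Question 10 concrete, here is a small sketch (Python, with hypothetical scores and thresholds) of the difference between a criterion-referenced cut score, which is fixed in advance against a defined standard, and a norm-referenced cut score, which is derived from how a pilot sample actually performed. This is illustrative arithmetic, not a recommended standard-setting procedure.

```python
# Illustrative only: two common ways a cut score might be set.
# Scores and the chosen percentile are hypothetical.

def norm_referenced_cut(pilot_scores, percentile):
    """Cut score set so that roughly `percentile` percent of the pilot sample falls below it."""
    ordered = sorted(pilot_scores)
    index = int(round((percentile / 100) * (len(ordered) - 1)))
    return ordered[index]

# Criterion-referenced: the passing standard is defined in advance
# (e.g. by subject-matter experts judging minimally acceptable performance).
criterion_referenced_cut = 70  # hypothetical expert-set standard on a 0-100 scale

# Norm-referenced: the standard depends on how a pilot group actually performed.
pilot_scores = [52, 61, 64, 68, 70, 72, 75, 78, 81, 88]  # hypothetical pilot results
print("Norm-referenced cut (75th percentile):", norm_referenced_cut(pilot_scores, 75))
print("Criterion-referenced cut:", criterion_referenced_cut)
```

Whichever approach is used, the point of Question 10 is that the method and its rationale should be documented so the resulting decisions are defensible.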
Equity & Legal Compliance (Weight: 35%)
Question 1
Is there a plan to involve members of diverse groups or qualified assessment specialists in the development and review of assessment content for fairness?
Reminder: A formal fairness review examines whether assessment content, language, and format may disadvantage candidates from particular groups — including racial, cultural, gender, linguistic, or disability-related backgrounds. It should be conducted by qualified reviewers before the assessment is used with candidates.
Question 2
Is there a plan to review assessment content for potential sources of bias during development, and to monitor for adverse impact across protected groups after the assessment is in use? (1 = No plan, 5 = Fully planned) A worked example of one common adverse-impact check appears after this question set.
Question 3
Is there a plan to provide accommodations or alternate versions of the assessment for candidates with disabilities or special requirements?
Question 4
Will written documentation be prepared explaining why this assessment is relevant, appropriate, and job-related — sufficient to withstand legal scrutiny if challenged?
Question 5
How well developed is your plan for record-keeping and audit trail management — including documentation of decisions, scoring, and candidate communications? (1 = No plan, 5 = Fully planned)
Question 6
Has the assessment design been reviewed against relevant legislation and policies (e.g. Accessibility, Privacy, Employment Equity, Human Rights)?
Question 7
Is there a plan to provide candidates with meaningful feedback or information about their results, and to ensure that any score reports communicate findings accurately and without misleading interpretation?
Question 8
How confident are you that the assessment, as currently designed, would withstand a legal challenge or formal complaint? (1 = Not at all confident, 5 = Very confident)
Consider: Is job-relatedness documented? Are scoring decisions defensible? Is there an appeals process planned?
Question 9
Is there a plan to protect the security and confidentiality of assessment content — including controls on who can access, reproduce, or disclose items, scoring keys, or candidate responses?
Question 10
Is there a plan to obtain informed consent from candidates and to ensure their assessment data is collected, stored, and used in accordance with applicable privacy legislation and organizational policy?
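The worked example referenced in Question 2 of this dimension: one widely used screening heuristic for adverse impact is the four-fifths (80%) rule, under which the selection rate for any group should be at least 80% of the rate for the group with the highest selection rate. The sketch below (Python, hypothetical applicant counts) shows the arithmetic; it is a screening heuristic only, not a legal determination, and the applicable thresholds and obligations depend on your jurisdiction.

```python
# Four-fifths (80%) rule screening check with hypothetical applicant counts.
# A flag here is a prompt for further investigation, not a legal conclusion.

def selection_rate(selected, applicants):
    return selected / applicants

groups = {
    # group: (number selected, number of applicants) -- hypothetical figures
    "Group A": (48, 120),
    "Group B": (20, 80),
}

rates = {group: selection_rate(s, n) for group, (s, n) in groups.items()}
highest = max(rates.values())

for group, rate in rates.items():
    ratio = rate / highest
    flag = "review for adverse impact" if ratio < 0.80 else "within four-fifths threshold"
    print(f"{group}: selection rate {rate:.2%}, impact ratio {ratio:.2f} -> {flag}")
```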
Strategic & Operational Viability (Weight: 25%)
Question 1
Does the planned assessment clearly align with the organization's current or future strategic workforce priorities?
Question 2
How feasible is the planned administration process — considering time, technology, staffing, and candidate experience? (1 = Major barriers, 5 = Fully feasible)
Question 3
Is the planned investment in developing this assessment — including design, piloting, training, and ongoing maintenance — proportionate to the anticipated value it will deliver?
Question 4
Is the assessment likely to be perceived by candidates as a relevant, fair, and worthwhile experience — one that reflects well on the organization and is unlikely to cause undue stress or disadvantage?
Question 5
How likely are hiring managers and key stakeholders to trust, accept, and act on the results of this assessment once it is in use? (1 = Very unlikely, 5 = Very likely)
Question 6
Has the integration of this assessment with existing HR systems, workflows, and data management processes been considered and planned for?
Question 7
How well defined is your plan for using assessment results to inform downstream talent decisions — such as onboarding, development planning, or subsequent selection stages? (1 = Not defined, 5 = Clearly defined)
Question 8
Is there a plan to collect job performance data after hiring and to test whether assessment results predict on-the-job outcomes — using that evidence to refine or revalidate the assessment over time? (A brief sketch of one way to examine this appears after this question set.)
Question 9
If the assessment involves human raters or assessors, is there a plan to train them on rating scales, behavioural anchors, and common rater errors before administration begins?
Question 10
Is there a governance plan for the assessment — including designated ownership, a review schedule, and a clear process for making decisions about revisions, retirement, or replacement of the tool?
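The sketch referenced in Question 8 of this dimension: one common way to test whether assessment results predict on-the-job outcomes is to correlate assessment scores with later performance ratings for the same hires. The example below (Python 3.10+, hypothetical data) computes a simple Pearson correlation; a real predictive-validity study would also need an adequate sample size and attention to issues such as range restriction, ideally with specialist input as covered under Psychometric Planning.

```python
# Illustrative predictive-validity check: correlate assessment scores with
# later job performance ratings for the same hires (hypothetical data).
from statistics import correlation  # available in Python 3.10+

assessment_scores = [62, 70, 75, 58, 81, 66, 90, 73]            # scores at selection
performance_ratings = [3.1, 3.4, 3.9, 2.8, 4.2, 3.0, 4.5, 3.6]  # ratings after 12 months

r = correlation(assessment_scores, performance_ratings)
print(f"Observed validity coefficient (Pearson r): {r:.2f}")
```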
Target Job / Role
—
Purpose
—
Assessment Type
—
Organization Type
—
Region
—
Administration
—
Duration
—
Scoring Approach
—
No. of Competencies
—
No. of Assessors
—
Intended Scope
—
Anticipated Volumes
—
Construct(s) to be Assessed
—
Job Level
—
Language of Administration
—
Assessment Description
—
Design Readiness Spectrum
💡
Early Concept
Idea only, major gaps in planning
📐
In Development
Solid foundation, some gaps remain
🏗️
Build Ready
Well-planned, ready to develop
Results by Dimension
Observations & Recommendations
I'd love to hear how this tool worked for you. Please share your feedback with me through LinkedIn.