When I look ahead to 2026 and beyond, I don’t see small, incremental adjustments. I see a sector that is being reshaped at its foundations.
The biggest forces impacting assessment are not happening within our industry; they are happening in the professions we serve. AI is changing how people work, how roles are structured, and what competence looks like in real practice. That disruption is already here, and it will only accelerate. Our responsibility is to ensure the credentials we award remain relevant and trusted in a world where the nature of work and study – and the pathways between them – continue to change.
At the same time, technology is advancing the tools available to those who seek to exploit our systems, often preying on test takers who are persuaded to pay for cheating services. Security threats are evolving quickly, and the opportunities for misconduct are growing more sophisticated. So the greatest pressures we face are clear: keeping credentials aligned with real-world practice and protecting the integrity of the testing process that underpins them.
How AI is reshaping competence and what we assess
AI is already enhancing the assessment process, generating content more efficiently, analyzing large datasets, and enabling more innovative security controls. Those improvements matter, but they are not the real disruption. The more fundamental change is happening in the workplace.
Across radiography, architecture, accounting, construction, education, and many other fields, AI is reshaping the day-to-day. AI tools are increasingly being used to automate tasks that were once performed manually. Information that once had to be memorized is now instantly accessible. As a result, the skills that matter most are no longer about retaining knowledge, but about application: interpreting information, exercising judgment, and overseeing the outputs of automated systems.
If knowledge recall is no longer the primary marker of competence, then assessment must evolve. We will need to measure how people apply knowledge in context, how they collaborate with AI tools, and how they make decisions when technology is embedded in their workflows. The greatest impact of AI will be felt not in how we create assessments, but in how we define competence itself.
How assessment will evolve: from one-off exams to continuous evidence
These shifts in competence naturally lead to changes in assessment models. One of the clearest trends I see emerging is the movement away from single-point, multiple-choice exams toward more continuous, real-time demonstrations of competence.
Early adopters are already exploring practical tasks, performance data, and real-time activity to build a richer, more accurate picture of capability. Over time, I expect these approaches to become far more common.
This shift will be complex. Assessment sits within a broader ecosystem of education, training, supervised practice, and regulation. Before someone sits for a licensure exam, for example, they have completed years of preparation. Changing the assessment model means every part of that ecosystem must move with us.
As we saw with the redesigned TOEFL exam, meaningful change requires long lead times and deep engagement with educators worldwide. But the direction is clear. Certification and licensure will become less about proving competence once every few years and more about demonstrating it continuously.
Workforce uncertainty and emerging skill needs
All of this is unfolding against a backdrop of workforce uncertainty. The number of roles that can be automated is increasing, particularly entry-level roles. Some jobs will disappear, and many new ones will emerge. Right now, however, we know more about the jobs going away than about the ones being created. This leaves workers and employers caught between two realities:
- Traditional skills are no longer sufficient.
- New skills are not yet clearly defined.
Even AI literacy, one of the most frequently mentioned emerging competencies, has no universal standard. Different platforms require different forms of expertise, and we lack a shared understanding of what competent use of AI looks like in practice. Compounding this challenge, those platforms are evolving so quickly that competence today does not necessarily mean competence tomorrow.
This is precisely why research will be so important in the coming years. We cannot rely on assumptions. We need hard data, calibrated methodologies, and rigorous analysis – underpinned by the science of measurement – to define these new skill areas and determine how to measure them responsibly.
What will remain constant
While the skills needed in the workplace or in education may shift, one thing will not change: the science of measurement. Psychometric principles still apply, even as tools and contexts evolve. Just as the principle of gravity didn't change when we moved from horses to cars or began to fly, the foundations of assessment remain constant. Fairness, validity, reliability, freedom from bias, and ethical practice continue to apply, regardless of how skills themselves evolve.
In the near term, I expect to see more focus on:
- applied skills and decision-making
- oversight and management of AI-assisted tools
- digital task performance
- durable, transferable skills that support adaptability
Our strength lies in our ability to measure new competencies using sound, defensible methodologies grounded in decades of science.
How ETS and PSI will evolve to lead
ETS and PSI are in a uniquely strong position to guide the sector through this period of transformation. Not because we chase innovation, but because we ground it in science, evidence, and real-world impact.
Central to that leadership is the work of the ETS Research Institute, which is focused on three critical priorities as assessment evolves in an AI-enabled world:
- Defining the competencies that matter in an AI era
As work and study change, we are focused on identifying which skills and forms of competence truly matter, and how those expectations are shifting across professions, education, and the workforce.
- Creating a new paradigm for how measurement is carried out
This includes rethinking traditional assessment models, exploring new ways of capturing evidence of competence over time, and ensuring that emerging approaches remain valid, fair, reliable, and defensible.
- Conducting policy research that informs the responsible use of innovation
Innovation does not happen in a vacuum. Our research is designed to support policymakers, regulators, and institutions as they navigate the implications of AI for assessment, credentialing, and public trust.
This work is strengthened by the breadth of data and insight we gather across hundreds of professions, industries, and geographies, giving us early visibility into emerging trends. But leadership also requires discipline: staying anchored in science, guided by evidence, and focused on what the data tells us, not hype or speculation.
AI governance will be essential. So will innovation in test security. I expect fears of item exposure to diminish as AI enables effectively unlimited item pools and near-unique exam forms for each test taker. And as digital and biometric data mature, we will be able to identify impersonation and proxy testing through behavioral signatures that cannot easily be replicated.
Integrity will remain the cornerstone of trust, and technology will give us new tools to uphold that trust.
The future of assessment
The future of assessment will be defined by relevance, integrity, and the ability to evaluate real competence in a world where AI is fundamentally reshaping work. The organizations that succeed will be those that stay anchored in science, driven by data, and focused on the needs of learners, workers, employers, and the public.
ETS, supported by PSI’s operational strengths, is well-positioned to help lead this transition. The path ahead will be challenging, but it will also be full of opportunity. If we stay focused on the fundamentals, we will not only adapt to the future of assessment but help shape it.