International Society for Intelligence Research (ISIR) 2025 Annual Conference

Leo Hesting, Northern Michigan Mensa

Introduction and background

This year the International Society for Intelligence Research (ISIR) met for its annual conference from July 23 to 26, 2025, on the campus of Northwestern University, in Evanston, Illinois. The occasion marked the 25th anniversary of the ISIR, which was founded by Douglas Detterman, who also founded the academic journal (peer-reviewed and all that) Intelligence in 1977. Both the journal and the society have the same goal: to pursue and support rigorous, careful, scientific study of human intelligence—study unfettered and unmotivated by political or social pressures or trends. “Follow the truth” seems to be the motto of every ISIR member I’ve ever met and every ISIR presentation I’ve ever attended. And though there are, of course, disagreements in the field—some very serious, with passionately held opposing views—this standard of “good science”, along with apparent collegiality, is well-adhered to.

The ISIR is a small organization as academic societies go. Compare the 150 or so attendees at a typical ISIR conference (this year’s was smaller, read more below) to, say, the American Education Research Association (AERA), which held its annual conference/convention nearby in downtown Chicago in 2023. The AERA conference had 15,000 attendees, thus dwarfing ISIR’s conference by a factor of 100 to 1. As a fellow ISIR conference attendee put it: “ISIR doesn’t have the most people, but it has the right people.” He’s right about that; if you are interested in human intelligence, you’ll find the well-established lions of the field at ISIR meetings and in associated/affiliated publications.

I mentioned unusually low attendance at this year’s conference. The ISIR has held its annual conferences in Europe, North America, and Australia; generally for the past 10 years or so alternating between Europe and North America. Specifically:

  • 2025: Evanston, Illinois
  • 2024: Zurich, Switzerland
  • 2023: Berkeley, California
  • 2022: Vienna, Austria
  • 2021: Virtual/on-line
  • 2020: Amsterdam, Netherlands – cancelled
  • 2019: Minneapolis, Minnesota
  • 2018: Edinburgh, Scotland
  • 2017: Montreal, Canada
  • 2016: St. Petersburg, Russia
  • 2015: Albuquerque, New Mexico
  • 2014: Graz, Austria
  • 2013: Melbourne, Australia
  • 2012: San Antonio, Texas
  • 2011: Cyprus
  • 2010: Alexandria, Virginia
  • 2009: Madrid, Spain
  • 2008: Decatur, Georgia
  • 2007: Amsterdam, Netherlands
  • 2006: San Francisco, California
  • 2005: Albuquerque, New Mexico
  • 2004: New Orleans, Louisiana
  • 2003: Riverside, California
  • 2002: Nashville, Tennessee
  • 2001: Cleveland, Ohio
  • 2000: Cleveland, Ohio

This year’s conference was held on the Northwestern University campus, academic home to Bill Revelle, who received the ISIR’s Lifetime Achievement Award this year. Bill’s achievements and contributions are too numerous to list here, but they include the creation of the world’s largest free online IQ test (the International Cognitive Ability Resource, or ICAR), which has been taken/used well over 2 million times; valuable and widely used computer code/tools (the “psych” package in R); and a long list of findings and publications. I can personally attest to Bill’s helpfulness and am glad he won the award.


William Revelle is a professor of psychology in the Department of Psychology of Northwestern University. See his entry in the Personality Project. In 2025, he won the Lifetime Achievement Award from the International Society for Intelligence Research (ISIR).


The ISIR, focusing as it does on a vital/critical area of study that also takes a lot of heat (my own son’s first comment about my work was “What about the ethics of IQ testing?”), draws support from some pretty heavy hitters. For example, Steven Pinker gave the introductory lecture at the 2023 annual conference. This year’s chief speaker was Claire Lehmann, founder and editor of Quillette magazine. These and other folks, not themselves primarily intelligence researchers, are well acquainted with the experience of discovering and stating truths that are detested and/or combated by opponents who feel or fear that those truths might lead to repugnant outcomes.

This year, another luminary—James Heckman, Nobel-prizewinning economist—gave a pre-conference talk, partly to inform but also specifically to present his preliminary findings to psychologists (most ISIR members come from this field) and other experts to get their opinions. In this he succeeded—for example, Bill Revelle was able to point out a critical fine point in the extensive data Heckman presented, right there on the spot. I was able to spot the anomaly too—but only after Bill pointed it out. 🙂

A preliminary note

I had heard this before, but at this year’s ISIR several presenters/speakers happened to make the point, apparently independently. Though their backgrounds and credentials vary, most of these folks are PhD psychologists by degree and training. For whatever reason (probably a combination of personality and academic training) many researchers in the field flatly state that they are not, first and foremost, intelligence researchers. Or they’ll describe themselves as “personality psychologists with a focus on intelligence.”

It’s a good point of view. Besides being appropriately human-oriented or humanistic (we are, after all, more than just “smart, dull, or somewhere in between”) this point of view, commonly articulated, leads to collaboration with others. Focusing purely on g (short for “the general factor of intelligence”), to the exclusion of other factors or personality, would be suboptimal.

Day 0: Wednesday, July 23—evening “pre-conference”

ISIR conferences are customarily preceded by a special talk and get-together sponsored by the Institute for Mental Chronometry (IMC), which is a long-term funder and supporter of the ISIR. Mental chronometry is the study of the timing of mental processes. The speed of cognitive processes is an important component of intelligence. Thus, the IMC’s purposes are not identical to ISIR’s, but there is considerable overlap. Such overlap between academic fields is typical, as you will see from some of the other presentation summaries, below.


James J. Heckman received the 2000 Nobel Memorial Prize in Economic Sciences, in part for his work on the Heckman correction for sample selection bias and its broader impact.


This year’s speaker, James Heckman, has expanded his own research a bit, as economists are fond of doing. (The best-selling book Freakonomics was written by economists who wrote on many topics/matters outside the normal purview of economists.) Heckman has begun collaborating on a research study in remote regions of China. In his talk, he explained the phenomenon of “left-behind” children in rural, relatively poorer areas. This “left-behind” phenomenon is a kind of version of the “brain drain” that is common here in the USA (and which has been the norm throughout civilizations worldwide for all of history), in which able people travel to centers—typically cities and/or countries—for economic and other opportunity. In the case of China, it is common for one or both parents to travel for extended durations—commonly years to more than a decade—to earn a better living. Most of the migration occurs from the remote western regions to the eastern/coastal areas where China’s economic development is largely centered. The children are left behind, sometimes with a mother or, if both parents migrate, with grandparents and/or other family. This phenomenon is well-known, well-documented, and somewhat well-studied.


Heckman and other researchers from the University of Chicago are working with the Rural Education and Child Health project (China REACH), a groundbreaking study that will guide national parenting and nutrition programs throughout rural China.


Heckman presented a lot of information—too much to deliver or absorb in just an hour—but the gist of his research is unusual and rather interesting. Along with Chinese colleagues including Zhe Yang, Heckman is conducting a study of “left-behind” and other rural Chinese schoolchildren, with a focus on determining something I’d never thought about: whether and to what extent they know their own consumer preferences. This is a “longitudinal” study—meaning it is carried out by monitoring/assessing/working with the same subjects/children over a long period of time. So the researchers will ideally be able to track how these children’s consumer preferences change over time.

What’s unusual about Heckman’s study is that it calls into question, or perhaps “explores,” a fundamental assumption that economists standardly make: that each consumer actually knows their own preferences. There are some assumptions that economists classically make—such as the assumption that every consumer will always act in his/her own best interest—that are actually known to be false. This could be another one. On the face of it, it seems silly. Of course I know what I want! How would I not? In the case of children, it may seem even more obvious. A kid wants what they want: for example, they know that they want that lollipop, and they know that they want the red one.

But it turns out that this is not always the case in fact. Sometimes we don’t actually know what we really want. Advertisers and salespeople of course are skilled at manipulating desire. And marketers are used to surveying/determining/assessing consumer preferences. But whether you’re assessing consumer preferences via surveys or other market research tools, or via actual consumer behavior (e.g., sales data), there could still be holes in your logic, errors in your calculations, inaccurate ideas of “where people are at and what they want.”

If you think about this, it just makes sense. Parents from time immemorial have worked the trick—more or less successfully—of “steering” a demanding child’s desire with such tactics as suggesting “But, wouldn’t you really like that very nice other thing?” And we all know people who’ve said “I just can’t make up my mind—I really don’t know just what I want/prefer.” Maybe we’ve said this sort of thing ourselves.

Heckman and his colleagues have devised ways to measure how well an individual—even a child—actually knows his or her own preferences. Through this work, Heckman and colleagues have found—perhaps unsurprisingly—that the higher the child’s IQ, the more accurately they are able to assess their own desires. Thus the tie-in to intelligence research. Heckman knows that there’s a lot of psychology involved in this work; he freely stated that he’s an economist who knows little of psychology, and that he wanted to present his work to a room full of experts on psychology, personality, and IQ and get their views and insights. In this he succeeded.

One last note about Heckman’s work. Those of us who have worked with “Chinese nationals” are generally aware that the Chinese government and more specifically the Chinese Communist Party (CCP) has great interest in certain aspects of human motivation, behavior, and malleability. Of course, Chinese scientists are doing good work in all sorts of fields; but in such fields as this one (Heckman’s work), it’s always interesting to ponder: “What might they be making of this?”

Day 1: Thursday, July 24

Detterman tribute and interview


Douglas K. Detterman is a psychology professor at Case Western Reserve University. He founded the journal Intelligence in 1977 and the International Society for Intelligence Research in 2000.

Read his CV. Listen to an interview.


The day kicked off with a session honoring the aforementioned Doug Detterman, founder of the ISIR. I had heard of him and read some of his work; it was nice to be treated to a list of some of his achievements. These testaments to someone deemed excellent can drag on, be boring, or—as was often the case during my career in Corporate America—be false and/or overblown to the point that they’re turnoffs, having the opposite of the desired effect and discrediting institution, speaker, and honoree alike.

Not in this case. Detterman’s work and lifetime achievements were actually interesting, and so was the presentation. Detterman himself tuned in “virtually” via video link and he spoke briefly. Turns out, that’s typical—apparently, according to those who know him, he’s soft-spoken and, though inspiring and excellent as a mentor, a man of few words.

What I remember from his brief remarks was in regard to a question about artificial intelligence (AI). Now, quite apart from Detterman’s talk, I already knew that definitions of intelligence, though they all have a lot in common, vary quite a bit. “Just what exactly is ‘that thing’ anyway?” is a common question. We all know that it exists—even small children can identify it, call it out, detect it, and to a pretty surprising level, assess it. (I once knew an 8-year-old who called her 6-year-old sister “the smart one” in her family—she was right.) But to define it? Not so easy.

Detterman’s comment about AI was that, whatever it may be, it isn’t the same thing as “what we know as intelligence.” Basically, he said, as it exists today and is projected to continue, AI amounts to the (machine’s) ability to perform very rapid calculations (to which I would add, “upon very large data sets”). True enough, good point. Without going into the most common/popular definitions of intelligence (you can look them up), it’s clear: given that characterization of AI, these two things are not the same beast.

I wonder how many people noted Detterman’s excellent insight. Not that I’m the greatest listener in the room—that’s not what I’m claiming. (Nor by the way was I the smartest person in that room—not by a long shot!) But, simply, Detterman was, as per his reputation, very soft-spoken. He didn’t “make a point” so much as just “say a thing.”

Michigan State University’s Gifted and Talented Program

The core material of the conference began with a number of presentations given by Ersie Gentzis, Kayla Whitley, and Leah Cameron Jansen, who are graduate students from The Hope Laboratory at Michigan State University, reporting their findings regarding a sample of “gifted” (a standard euphemism for “high-IQ”) students who participated in one of MSU’s Gifted and Talented Education (GATE) programs. Their findings were interesting, describing various aspects of this population. They spoke of motivation, psychological factors such as anxiety/depression (apparently these are linked), academic interests, and the power of joy. These topics are of interest to me, as I worked with such youngsters in similar programs.

It is too easy to dismiss academic research. Much of academia consists of exploring topics that you might figure “are just common sense”. So for example, a common response to the question “does academic motivation predict how the student’s talent develops?” might be “Well, duh. Of course it does, dummy – nothing to learn here.” Similarly for exploring effects of depression/anxiety.

But if you take the inquiry a little further, asking “Does academic motivation predict a student’s talent development, and if so, how and in what way?”—the question gets more involved, and more interesting. The answer, if there ends up being one, is not necessarily obvious at all. The same goes for “all that mental health stuff,” such as anxiety/depression. When I worked with high-IQ youth decades ago, there was still an old canard (it still hangs around a bit even today) that very smart youngsters were emotionally/socially crippled. I quickly found out that that wasn’t the case: the smart kids in our program were at least as socially adept (and actually more adept, in my view) as a normal/average/representative population of kids their age. And since then, my own “naïve” or “snap” observation has been confirmed by research.

Now that is a digression away from what was presented at this year’s ISIR conference, but it’s an example of how academic research can benefit people—real individuals living in the real world. In this case, students in a gifted/talented program. Learn from the research, see if it passes the “sniff test” and fits with what you do, try it out, maybe incorporate what you’ve learned into your own program. For the benefit of everyone involved.

Screening Tests for Gifted Youth

Al Mansor Helal from the University of Arkansas’ Department of Education Reform explained his findings regarding which screening test works better for identifying talented youth. Again, this topic is near and dear to me. Here in the state of Michigan, which zeroed out all state funding for gifted/talented youth back in 2016, it can be hard for teachers, parents, and other child advocates “on the ground and in the field” to even carry out that first task of identification. My whole reason for joining and attending ISIR in the first place was (and still is) “To support them, you first have to identify them.” Probably you, dear reader, already know—you can’t (necessarily and accurately) determine how smart a person is just by talking with them, and certainly not by looking at them. If you really want to know, then you really have to test. But what test to use? Here in Michigan I hear from teachers “I’d really like to use test X but my school doesn’t have the budget so I make do with test Y”.

Again I digress (“sorry not sorry”); I want to make the point that much of ISIR members’ work has practical relevance/application.

Helal showed which of the two instruments he had studied did the better job of identification. Good job; unfortunately, the practical significance was largely wiped out by the fact that the publisher of one of the instruments has discontinued it. So that is, in a way, work gone down the drain, producing information no one can use. Alas. Wouldn’t it be nice if that never happened to us?

We also learned that participation in gifted/talented high school programs in Arkansas is an indicator of whether students go on to attend college, 4-year schools, and high-prestige (e.g., Ivy League) colleges/universities, but a distinctly worse indicator than “did they enroll in at least 2 AP classes?” Makes me wonder about the value—and the dilution—of those programs.

Teaching Math to the Gifted

Unfortunately, a presenter I very much looked forward to hearing and learning from, Henry Mudenda from the Zimbabwe Open University, was unable to attend this year. At last year’s ISIR conference in Zurich, Henry presented some important and (to me) fascinating material. I looked forward to his talk, which was to have been on “how to improve math teachers’ ability when teaching mathematically gifted students.” It is always good to get an international perspective on things, and this area is—again—one of direct interest to me, as I have taught math to just this population: gifted students. I can tell you for sure—just in case you’re interested—it is not simply a matter of “teach them the same material, just faster”. Hopefully Henry will be able to make it next year in Warsaw: I would like to learn from him.

The Flynn Effect (Secular Trends in IQ)

I suppose that if you think of Vienna you might—if you’re of a certain age and background—think of the Vienna Waltz. But it turns out that the Viennese have a good research program there, at the University of Vienna. Four researchers (Jakob Pietschnig, Alina Bugelnig, Jonas Lesigang, Sandra Oberleiter) from their program gave presentations related to “The Flynn Effect.” James Robert Flynn started out as a “moral philosopher” (and competitive runner, and civil rights activist, among many other things) and then got into intelligence research. Looking over a bunch of national studies of IQ carried out over many decades—close to a century, actually—Flynn realized that the average “raw score” on any given test kept rising (a secular increase). Tests were periodically “re-normed” to counteract this, but if you just went by the raw scores, it would seem that worldwide and overall, everyone was getting smarter, and really fast: somewhere between 3 and 5 IQ points per decade, depending on the population and measuring instrument (meaning the particular IQ test). Toward the end of his life, Flynn himself—along with other researchers—noticed that the effect might be slowing, topping out, or even reversing (a secular decrease). People were no longer scoring higher and higher as time passed. Or maybe they still were, but only on certain tests. Both before and after Flynn’s death in 2020, a lot of work has been done on the Flynn effect. Some of the latest—and most in-depth and “powerful”—was presented at this conference.

One study looked at both cross-sectional and longitudinal assessments. Careful analysis showed that the question “Was the Flynn effect real, and has it reversed?” may best be answered “It’s complicated.” The presenter is well-respected and well-known for his strong advocacy (and robust supporting evidence and analyses) of the proposition that “the Flynn effect is a real thing.” Now he has changed his message, or conclusions, slightly. Basically, he looked deeper into the results of various IQ tests and found that while it is true that full-scale IQ (FSIQ) scores have been rising, performance on individual subscales and portions of the tests has varied in more complicated ways. Since the FSIQ is a kind of summary of all the component subtests and portions, a lot depends on how you do your summarizing.
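To see why the summarizing policy can matter, here is a toy sketch. All numbers are invented for illustration (they are not data from the talk): two subtests drift down and two drift up, yet a simple-mean full-scale composite stays perfectly flat.

```python
# Hypothetical subtest means (IQ-scale points) for two cohorts -- invented
# numbers, purely to illustrate how subtest trends can hide inside a composite.
def fsiq(subtests):
    """One possible summarizing policy: an unweighted mean of subtest means."""
    return sum(subtests.values()) / len(subtests)

cohort_a = {"vocabulary": 102, "arithmetic": 104, "matrices": 94, "speed": 100}
cohort_b = {"vocabulary": 96, "arithmetic": 98, "matrices": 103, "speed": 103}

print(fsiq(cohort_a))  # 100.0
print(fsiq(cohort_b))  # 100.0 -- flat composite despite opposite subtest trends
```

A different weighting of the subtests would, of course, report a rise or a fall instead of a flat line, which is the point.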

The Flynn Effect is interesting, but it doesn’t affect my work very much. My mission is to identify talented youth, here and now—so the slow change in average IQ scores over time does not affect that.

The Austrians presented other interesting and related findings. For example:

  • Austrian (military) conscripts in more recent cohorts completed more items overall but also made a higher number of errors. Here is a clear example of a result that is/would be affected by your scoring policy. If you deduct points for wrong answers (as some, but not all, tests do) you may find that there is no increase in scores after all.
  • Similarly, changes in “attention performance” may be responsible for changes in IQ scores.
  • Prosperity does appear to matter. “Prosperity,” of course, is basically the first thing that comes to mind when you first learn of the Flynn Effect. It seems intuitive that a better-fed, better-educated, healthier population would do better on IQ tests. But this simple explanation has (in other studies) been shown to be insufficient. However, “there does exist a Flynn Effect of Executive Function and this is strongly influenced by country”. Curious to know more? You’re going to have to read the paper(s), because I couldn’t take notes that fast. Suffice it to say that the authors examined the results of over 38,000 tests published over the period 1946-2025. That’s a lot of work.
  • Another presenter pointed out a number of things. Firstly, studies of the Flynn Effect are typically “opportunistically using incidental data”—not originally collected for these purposes, which may complicate things.
  • Another complicating factor—about half of the items typically used on the test are “subject to item drift.” For example, the decline in facility with numbers is approximately 3 IQ points/decade. This implies “a weakening of the positive manifold”. The positive manifold is basically the original observation that you can get a good idea of a person’s “overall” or “full-scale” intelligence by examining their performance on any number of cognitive “challenges” or tasks—they’re all related. This in turn is foundational to the conception of “intelligence” or g as we know it. So, a weakening of the positive manifold, if it is occurring, would have some significant real-world effects.
  • Related to, or a possible characterization of, the above: “People are becoming more specialized, certain skills are becoming more important.” Another way of saying this: people’s cognitive profiles are becoming more asymmetric.
  • This brings up the concept of using people’s “cognitive profile” as opposed to FSIQ.
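The scoring-policy point in the first bullet above can be sketched with made-up numbers (purely illustrative, not the Austrian conscript data): a recent cohort attempts more items but makes more errors, and whether that registers as a gain depends entirely on whether wrong answers are penalized.

```python
# Invented counts, for illustration only: "recent" attempts more items than
# "earlier" but also gets more of them wrong.
def raw_score(correct, wrong, penalty=0.0):
    """Score under a given wrong-answer deduction (penalty=0 means no deduction)."""
    return correct - penalty * wrong

earlier = {"correct": 40, "wrong": 5}
recent = {"correct": 44, "wrong": 13}

# Without a penalty, scores appear to have risen (40 vs. 44)...
print(raw_score(**earlier), raw_score(**recent))
# ...but with a half-point deduction per error, the "gain" vanishes entirely.
print(raw_score(**earlier, penalty=0.5), raw_score(**recent, penalty=0.5))  # 37.5 37.5
```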

Regarding all the above, a (respected) lion in the field commented that you get a better assessment of g with an assortment of assessments, but only within a culture. So the WAIS (Wechsler Adult Intelligence Scale—probably the most highly regarded instrument) is a much better measure of g within the U.S.A. than, for example, the Raven’s Matrices. However, the WAIS is not cross-cultural.

That last comment fits right in with my own efforts and direction, as I’m using not a single test, but a battery of approximately 8 of them. More about those later, but it’s nice to get confirmation that I’m on something like a “right track.”

I will describe the last Austrian’s presentation later on—for now it is time to move on to the next category.

IQ Tests for Autistic Youth

Many of the last presentations of the day, on Thursday, were a kind of loose collection of measurement-related topics. Audrey M. Scudder from the University of Connecticut reviewed studies that showed that the IQ scores for autistic youth depend heavily on which test was administered. They concluded that practitioners may want to consider alternative or expanded methods of cognitive testing for children and adolescents with autism spectrum disorders.

Berlin Numeracy Test

Next, Saima Ghazal gave a presentation on the Berlin Numeracy Test. Its name doesn’t really describe it well; it’s basically a short (2-minute) assessment of people’s ability to assess risk. You can learn more about it at www.riskliteracy.org, a nonprofit university-based project designed to help increase awareness about risk literacy (i.e., the ability to understand, evaluate, and make good decisions about risk). The goal of the project is to increase people’s “practical ability to evaluate and understand risk in the service of skilled and informed decision making.” Seems like a good effort and I liked the test. However, a respected colleague of mine asked some probing questions (that mostly went over my head) and appeared dissatisfied with some of the claims. Personally, I can use this test—there is a 4-minute paper-and-pencil version that the authors make freely available. (A big thank you to them!) This will go into my toolkit. It does appear to be a measure of “something,” and I bet it is related to g (because, what isn’t?).


Riskliteracy.org offers a free 30-minute program, suitable for classroom use, intended to teach people to understand and evaluate graphical representations.


Basically the test evaluates people’s ability to do some quick probability calculations—3 or 4 of them, I believe. The project is an attempt to get people to make better choices in their “real lives”—when for example deciding whether to take out a mortgage or indulge in risky behavior. The online test is, I guess, designed to get them thinking along more sensible lines, or something. Personally, I’ll be able to incorporate it into my own battery of tests since it’s free. A colleague opined aloud that the test was too “pat”—he saw some statistical correlations that looked “fishy” to him. When he tried to explain briefly to me, I couldn’t quite follow, as he has years of formal study in statistics that I lack (he has a PhD and has been using/practicing those techniques for a decade or more).
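For flavor, here is a made-up item in the same frequency-based style (not an actual Berlin Numeracy Test question; the real items are available at riskliteracy.org), worked out in code:

```python
from fractions import Fraction

# Invented example in the general style of frequency-based risk items (NOT an
# actual Berlin Numeracy Test question): in a town of 1000 people, 500 sing in
# a choir. 100 of the choir members are men, and 300 of the 500 non-members
# are men. What is the probability that a randomly chosen man sings in the choir?
men_in_choir = 100
men_total = 100 + 300
p_choir_given_man = Fraction(men_in_choir, men_total)
print(p_choir_given_man)  # 1/4
```

The trick in items like this is noticing that the denominator is "all men," not "all people"—exactly the kind of quick conditional-probability reasoning the test is after.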

Step 1 and Step 2 Examinations (Medical Students)

Then, Nathan Kuncel from the University of Minnesota gave a very interesting presentation examining the various screening and prediction tools that are used in determining who gets into which medical residency programs. Students in medical schools in the United States pass two important examinations:

  • Step One evaluates the medical students after they complete the academic/coursework (or “book learning”) portion of their training;
  • Step Two evaluates their competence in actual work/practice with patients.

Historically, the top medical residency programs (e.g., the Mayo Clinic) accepted the top scorers on these tests (especially Step 2) into their residency programs. But “for reasons” (as the young say), things have changed somewhat. For one thing, the numeric scores themselves are no longer reported—only a binary “pass/fail”. Also, various parties have advocated that other evaluative elements (e.g., letters of recommendation) be considered to provide a “comprehensive” or “360-degree” view of the student. Nevertheless, the presenter showed rather neatly that the best predictor of a medical resident’s performance is their scores on the Step 1 and Step 2 examinations.

He briefly explained why: for example, a common “deal” in the field is “Work in my (research) lab for free for a year and I’ll give you a letter of recommendation.” Other factors include physicians’ (and program directors’) egos. “Telling them that Excel provides a better indicator than their professional judgment is typically not well-received.”

“Adaptive” IQ Testing

Kristóf Kovács of Eötvös Loránd University (ELTE) in Hungary gave the day’s penultimate presentation, which was another very important announcement. Now completed and available is a new “adaptive” test, suitable (for example) for use by Mensa UK in determining whether a test-taker meets Mensa’s “top 2%” criterion. Now, “adaptive” means a number of things, but one that’s relevant in this case is that an adaptive test is harder to cheat on, as there is no single answer key. You can’t just find a copy of the test online (or buy one), memorize the answers, and walk in and get a top score. The adaptive test, as the presenter showed and described it, uses two different techniques, one of which I was pleased to see mirrors a technique I came up with for one of my own test batteries. The other technique presented will also come in handy for me; I’ll be able to use the scheme presented. In fact, this was next on the list of tests to add to my own battery, and the innovation presented will be of great value to me. This was a valuable presentation!
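To make the general idea concrete, here is a minimal sketch of one generic adaptive-testing scheme (a shrinking-step “staircase”; this is my own illustration, not either of the techniques Kovács actually presented). Each test-taker gets items matched to their running ability estimate, so there is no fixed form with a leakable answer key:

```python
# A generic staircase-style adaptive test: present an item at the current
# ability estimate, move the estimate up on a correct answer and down on a
# miss, and shrink the step size so the estimate homes in on the test-taker's
# true level. (Illustration only -- not the techniques from the presentation.)
def run_adaptive_test(respond, n_items=20, start=0.0, step=1.0):
    ability = start
    for _ in range(n_items):
        correct = respond(ability)      # item difficulty == current estimate
        ability += step if correct else -step
        step = max(step * 0.8, 0.1)     # shrink, but keep a minimum step size
    return ability

# Simulated deterministic test-taker whose true ability is 2.0: they answer
# correctly exactly when the item is easier than their ability.
estimate = run_adaptive_test(lambda difficulty: difficulty < 2.0)
print(round(estimate, 1))  # 2.0
```

A real system would draw from a discrete, calibrated item pool and use a psychometrically grounded update rule (e.g., item response theory), but the anti-cheating property—no two test-takers necessarily seeing the same item sequence—is already visible in this toy version.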

Testing for Rationality

In the day’s final presentation, Devin Burns of Missouri University of Science and Technology spoke about testing for “rationality.” This of course is a separate capability from g or “raw intelligence”—I think we all know smart people who hold irrational beliefs and/or do irrational things. Also, plenty of folks who are normal in intelligence, or even dull, do an excellent job of thinking and acting rationally—for example, avoiding stupid goofs in their personal lives. So, the literature reflects the difficulty of testing for rationality—which, for starters, is not all that easy to even define. (“Not acting like a dumb s**t” would be how people ’round where I live would say it; it’s probably just as good a definition as any other.) The presenter concluded that a new (shorter) instrument they have developed “suggest[s] a step toward a psychometrically sound and time-efficient tool for assessing rational thinking, with implications for research, education, and applied cognitive training.” Sounds good to me, if they can make it work. Personally, from my (brief) time spent as a student teacher in public-school classrooms I’d say, this is a tall order. The presenter would appear to agree with me; quoting him: “It’s hard to come up with objective measures of rationality.”

Day 2: Friday, July 25

This day started out with a one-hour “keynote lecture” by David C. Geary, respected intelligence researcher and author (he’s written for Quillette). I have mentioned that intelligence researchers take a lot of flak if and when they discover results that people don’t like or are offended (or outraged) by. One such “third-rail” topic is the differences in cognitive ability between men and women.


David C. Geary is Curator’s Distinguished Professor of Psychological Sciences at the University of Missouri. His laboratory does longitudinal research on development of mathematical cognition in children, and associated learning disabilities. His studies in evolution currently focus on sex differences in vulnerability to stressors.


It’s commonly accepted that “men are better at mechanical stuff and women are better at people stuff.” Some folks consider this “environmental”—caused by “systemic (and invisible) sexism,” or by some flaw in society, culture, education, parents’ sexist attitudes, etc. This assumption has driven the enormous range of programs designed to “get girls into STEM.” One can argue about how valuable and effective those programs are, but extensive testing validates that (at least in my neighborhood) men and women, on average, differ in spatial ability.

If one doesn’t deny that such differences might exist and might even be innate, then one can lump them in along with other impossible-to-deny sex differences (e.g., height, upper body strength, throwing ability) and thereby get into the study of sex differences generally. Or, more accurately—since this topic is too big and there is too much material for any one person to grasp—look for other kinds of sex differences.

David Geary’s talk was an example of a topic that is not particularly about g (or intelligence), but is related, for the reasons I just described. The results he presented showed that “the wealthier/better off the country, the greater the sex differences.” David showed that over a period of some 400+ years (starting in the 1600s), as conditions improved, everybody (boys and girls) grew taller—but the boys gained height faster than the girls did—twice as fast, in fact. So whereas women got 11 centimeters taller, men got 15 centimeters taller over that time span. (I know, 15 isn’t twice 11—I’m hand-waving on the math here, which has to do with rates of change as opposed to raw outcomes.) The same increase in disparity between the sexes is found for upper body strength and some other characteristics (e.g., spatial ability, throwing ability). Likewise, female mammals generally live longer than males—but under difficult circumstances, the sex differences are smaller.

Probably many readers know that this is borne out at the social level in the Nordic countries. Those places, with high wealth, excellent health care and other “safety nets”, and near-perfect freedom of choice, see women going largely into “helping/people” fields (nursing, teaching, medicine) and men going into “thing” fields (engineering, IT, mechanics).

David also mentioned in passing that the female advantage in verbal ability is “laid down prenatally.” Also a ton of other stuff. For example, populations/groups that were subject to frequent raiding 500 years ago still differ in the level and nature of “social constraints” right down to today. The connection is that social constraints reduce individual differences, and therefore also reduce sex differences.

In academia generally—and in plenty of other venues or environments where the “blank slate” point of view is “required thinking”—you really need to be careful when even considering whether or not to present findings such as these. Twenty years ago, Lawrence Summers was forced to resign from the presidency of Harvard University after he suggested that one possible reason worth considering for men’s dominance in STEM fields might be the larger variance in ability among males, which puts more men at the very top level of ability. Unacceptable!

The nice thing about the ISIR conference(s) is that there is none of this sort of censorship, nor punishment of “incorrect expressions/findings/presentations.” David’s was an interesting talk, and I learned from it. I didn’t know that part about mammals, for example.

“Greedy” vs “Flexible” Careers

Following David’s keynote address, Gabriella Noreen from Vanderbilt University gave a presentation on some related topics, such as how people evaluate themselves and the choices different people make in their lives (and careers)—in particular, the topic of “greedy” versus “flexible” careers. A “greedy” career was defined as one in which the job will suck up all the time you care to devote to it. But—and this is key—putting in, say, a 60-, 70-, or 80-hour week increases the monetary reward, a lot. There are plenty of careers like this—you can probably list some off the top of your head. “Top performers earn top rewards,” not only in hourly take-home pay and/or bonuses, but also in promotions to higher-paying jobs. Other, “flexible” careers (think elementary school teacher, for example) are much more, well, flexible. So, for example, you can take time off to have a kid and return to teaching relatively easily. You lose some seniority but you still do OK. Similarly, you can work half-time, or part-time, or as a substitute. There’s a lot of flexibility available if you need/want it; but the pay is not generally that great, and it doesn’t—again, generally—get better the harder (or the more hours) you work.

Using data from a famous longitudinal study of high-I.Q. persons, and looking at the choices they made throughout their careers, the presenter showed that a lot of factors go into these choices. Not surprisingly, there is a sex difference. In a “young couple” scenario where both are earning similar amounts, when it comes to having a family (children), fairly often the husband will work longer, harder hours, with his resulting income more than doubling, leaving the family better off financially than when both husband and wife worked more “flexible” jobs.

This is all a plain-language summary of the presentation, but there was much more to it than I could note, as a lot of the terminology and findings went right over my head. Looking at my own notes from the day, I still need to figure out what “constructive replication” and “disordinal divergence” even mean.

Spatial Cognition

Kayla Garner from Northwestern University discussed spatial cognition and “geographic-contextual indicators,” pointing out, for example, that folks from rural areas and/or from cities with irregular street layouts (think London versus Salt Lake City) have better navigation skills. Not really too surprising, that; but new and interesting to me was the claim that greater “spatial cognition” is (positively) associated with cognitive health. I have heard that spatial ability is important and have structured my own testing accordingly, but I didn’t know that it had any benefits to individuals. Or, of course, maybe it doesn’t. It could be just another instance of the well-documented fact that g is associated with better health and longer lifespan generally.

Each year at ISIR, more work is presented in which researchers attempt to figure out just what exactly is going on in human brains when people think; also, can we get an idea of just what “powers” greater intelligence? Kirsten Hilger from Julius Maximilian University of Würzburg, Germany, gave a presentation on the mapping of electrical activity and its relationship to intelligence. This material, what with all its mappings of electrical activity, goes way over my head, so, sadly, there is little I can relate of the presentation.

Intelligence: An Entity or an Emergent Property?

Michael Woodley of Menie gave the final presentation before lunch on Friday. It had to do with a question one encounters in this field: is intelligence “a thing” (or in fancier language, is it an entity, does it show “entitativity”), or is it just an emergent property of other things going on? It is possible that this debate might never be resolved to everyone’s satisfaction. Tangible things (height, upper body strength) are easy to measure and analyze, but invisible things (intelligence, love, devotion, honesty) are a lot harder. Generally, questions like these are as much philosophical as measurable. In the case of intelligence, the question is closely related to another question: how much does a person’s environment matter? The presenter suggested that perhaps we might be seeing (or encountering, or dealing with) two different but related things; specifically, that there might even be “two g’s”—which proposition kind of came as a shock to the room. It’s well-established that g, or the “general factor of intelligence,” does in fact exist, and is generally associated with people’s performance on basically every cognitive task. So the idea that there might be two discernible variants of g, while intriguing, remains a bit puzzling. On the other hand, the concepts of phenotype and genotype are well known. This line of inquiry is—as far as I know—in its early days, and there remains much to be explored.

Danish IQ Studies

Friday’s afternoon session kicked off with a symposium presented by a team of Danish researchers: Gunhild Tidemann Okholm, Martin Stolpe Andersen, and Rebecca Beatrix Clarke of the University of Southern Denmark. Denmark has excellent resources for the study of intelligence, as every male of draft age must undergo intelligence testing; this has been going on since WWII (if I’m remembering right), and the test questions have not changed. Therefore, it’s possible, for example, to study the effect of aging on IQ by contacting the very same individuals who took the test 40 years ago and getting them to take that very same test now. (The Danish test is a military secret and therefore not widely available, which is a pretty decent bar to cheating.) Lots of other studies are possible and are in fact carried out—many research results were presented. For example:

  • Regarding the effect of childhood adversity on intelligence: a single exposure to adversity doesn’t affect cognitive ability. (Long-term/consistent/repeated adversity can.)
  • The test-retest correlation of IQ is 0.79, which is very high, even when the interval is 40 years.
  • The number of years in which “extreme binge drinking” (defined as more than 10 units of alcohol) was practiced has an adverse effect on cognitive ability.
  • Math and science scores correlate with the Danish Draft Board’s intelligence test, also known as the “BPP” (for “Børge Priens Prøve”—good luck pronouncing it just right), not with overall cognitive ability.
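To get a feel for what a 40-year test-retest correlation of 0.79 implies, here is a small simulation. It is purely illustrative: my own toy bivariate-normal model on an IQ-style scale (mean 100, SD 15), not the Danish data or the BPP’s actual scoring.

```python
import numpy as np

# Illustrative sketch (not the Danish data): simulate what a 40-year
# test-retest correlation of r = 0.79 looks like for IQ-style scores
# (mean 100, SD 15), using a bivariate normal model.
rng = np.random.default_rng(0)
r = 0.79
n = 100_000

z1 = rng.standard_normal(n)
z2 = r * z1 + np.sqrt(1 - r**2) * rng.standard_normal(n)
iq_young = 100 + 15 * z1   # hypothetical score at draft age
iq_old = 100 + 15 * z2     # hypothetical score ~40 years later

sample_r = np.corrcoef(iq_young, iq_old)[0, 1]
print(f"simulated correlation: {sample_r:.2f}")

# Regression to the mean: under this model, someone who scored 130 at
# draft age is expected to score about 100 + 0.79 * 30 = 123.7 on retest.
print(f"expected retest for a 130 scorer: {100 + r * 30:.1f}")
```

One practical consequence: even at r = 0.79, rank order is largely preserved, but high scorers drift back toward the mean a bit on the retest.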

There was a lot more, but the big takeaway for me personally, was yet another assessment instrument that I’ll be able to use. Glad I attended! Thanks to the Danes!

A note about attendance

Speaking of Danes, and of Europeans generally, who did or did not travel to this year’s conference: for a number of years, starting at least as far back as 2005, I’ve heard from time to time, from Europeans, that they hesitate to come to the United States for conferences. Reasons have varied over the decades, from “You (Americans) have made it so difficult” to—from what I heard at ISIR—actual fear. Apparently, Europeans are actually paying attention to the various press reports characterizing our President as a “fascist” and a “tyrant,” and generally casting him as hateful, evil, dangerous, a threat, crazed, deranged, stupid, an ogre, a racist, a liar, a manipulator, a narcissist … and so on. Only 4 Germans came this year—an unusually low number for this conference. One of the Germans who did come said to me, “they only asked me two questions at the border—it was nothing.” That of course does make sense, but it does leave me to wonder: what were the Germans (and others) expecting? That they’d be handcuffed, taken away to dark cells, and beaten? Questioned for days by shifts of interrogators using harsh lights and evil methods? Such things did actually happen (though not to me) back in the bad old days of Erich Honecker in the DDR (otherwise known as “East Germany”). That was scary, but I still went, did fine, and by the way, I learned a lot.

It seems strange to me, this apparent acceptance of what to me, is clearly propaganda. Don’t they realize that there is a nonstop propaganda machine here that churns out “Trump is bad” nonstop, no matter what? But, perhaps it’s simply the case that academics are not particularly brave.

The Rest of Friday Afternoon

Fortunately for me and other attendees, at least some Europeans were actually brave—or perhaps well-informed—enough to make the trip to this year’s conference. Maximilian Krolo of Saarland University discussed an ever-popular topic: the relationship between IQ and political interest and political leanings—specifically, party affiliation. Now, of course it is common for a person to characterize members of opposing political parties as “stupid,” but these results were quite a bit more sophisticated than that, having to do (in part) with the difference between voting behavior and party affiliation, with IQ as a factor. The fact that the political parties weren’t just the ones we Americans are used to was kind of a nice relief. As a listener, I didn’t “have a dog in this fight.”

Mental Health of Male versus Female “Tweens”

Finally, Madlena Arakelyan from Yerevan State University in Armenia presented her findings about “gender disparities in mental health of gifted adolescents.” The results themselves were, to me, not particularly surprising—basically, boys and girls (or “male tweens” versus “female tweens”) who have mental problems differ somewhat in their symptoms and diagnoses. Far more interesting to me is a valuable thing one gets out of these kinds of international conferences: the study the presenter referred to used wholly different concepts, measurements, and descriptions of “stress” than we in America are used to. Specifically, a measure called Heart Rate Variability, or HRV for short. You can learn more here: https://bioscaner.com/en/tech/hrv/ or here: https://pmc.ncbi.nlm.nih.gov/articles/PMC11763054/ and if you look there and/or elsewhere you’ll find foreign-sounding names (primarily Russian but also Chinese, and others) and some “interesting” English.

Armenia is of course heavily influenced by Russia. If you have ever traveled and/or lived in Eastern Europe, you may have found that whole traditions and bodies of medical practice over there can be different—in ways sometimes subtle, but other times kind of “majorly”; some of the differences may even seem extreme. In this case we find out about a whole way of looking at “the functional state of the organism”—a phrase which in itself sounds a bit “odd” to the American ear—complete with measurements thereof, and instruments, that we have never heard of over here, and that sound kind-of-plausible but also kind-of-strange.

It happens that I know this presenter personally from prior conferences; she’s a dynamic presenter. This time, she was kind of dismayed that no one asked her any questions after her presentation. I and others explained to her that given her time slot—last technical/academic/detailed presentation of the day—attendees were just tired. That was true, but what I did not add was that when a body of work—in this case some of her research findings, which were actually interesting—comes off as “too alien,” or is founded on concepts, measurements, and devices that are “too alien,” it kind of stymies the audience. You’re sitting there at a disadvantage, not knowing where even to start, for many of the things you as a listener normally take as “givens” are replaced by other, unfamiliar things. It’s a problem, but there is a solution: just chatting informally after, or during breaks between, the formal presentations. Which we did; or at least some of us, to some extent.

The last session of the day was an hour-long interview with Bill Revelle, whom I have already described, so I will omit saying more about him here.

Day 3: Saturday, July 26

The last day of the conference was the biggest day for me, for three reasons. First, the main speaker (and honoree) of the whole conference, Claire Lehmann, whom I mentioned before, gave her lecture—though really it felt like just a nice, friendly, informative talk. Claire Lehmann is the founder of the influential Australian—but really, international—online magazine Quillette.


Claire Lehmann is the founder of the online magazine Quillette and a regular contributor to The Australian. Follow her on Instagram @clairelehmann. Here’s an article based on her speech to ISIR.


Depending on your political leanings, you may never have heard of Quillette, and/or if you have, you may hate it. For the past 10 years or so, Claire and her crew have been doing their best to debunk whatever they see as erroneous “common knowledge.” I don’t know anyone who “subscribes” to “their line,” for it isn’t clear that they have one. Rather, they attempt to publish well-founded and well-written articles that other outlets won’t. I disagree with some of the articles Quillette prints, but I consider the effort as a whole to be excellent and exemplary. You of course may disagree, but to hear Claire’s testimony about how she came—as a mother of a toddler, no less—to begin, and then accomplish, this bold venture was quite inspiring. Claire is smart, and she knows her thinkers, including the ones so prominent in the establishment of today’s (or actually, maybe more like “yesterday’s,” since the bloom is off the rose now) received wisdom. I won’t go on trying to convince you of the excellence of a magazine you may never read, or may hate if you do check it out. Suffice it to say, it was nice to meet and converse with the person and put a face to “the editorial pen.”

Claire was awarded the ISIR’s Constance Holden Award for excellence in journalism covering intelligence. To my surprise, Claire mentioned offhand that she had never won an award before. I was glad to be there for the event! I have already written that intelligence research is typically viewed with great scorn, skepticism, and/or outright hostility, for reasons I won’t get into here. One upshot is that journalism about IQ is typically slanted, biased, woefully out of date, and, as I mentioned, hostile. More often the subject is just kept “in the closet” and you don’t see/read articles at all. Fortunately, publications like Quillette buck this trend and give voice to good authors presenting good (by which I mean true) information. Good for Claire and good for the ISIR!


The International Society for Intelligence Research awards the Constance Holden Award for excellence in journalism covering intelligence. Constance “Tancy” Holden was a science journalist and artist. She worked at Science from 1970 until her death in 2010. 


Next up on the morning’s program: me, Leo Hesting. I had 20 minutes to present 7 intelligence (and/or “screening”) tests that I will be using in my effort to support talented youth in northern Michigan, where I live. That’s not much time, obviously—describing each test for even 3 minutes would have made me “run long,” which I did not want to do out of courtesy to the presenters who followed me. Though I’d rehearsed to get things down to 18 minutes, I expounded a bit and ran the full 20. Upon realizing this, I quit right then and there, without taking any questions. (The heck of it is that the guy after me ran 7 minutes long with his own presentation, which consisted largely of him reading his slides—mainly quotes from other authors. He basically lionized Hayek.)

Not having time for questions was OK; folks who had questions just caught up with me later. My presentation was well-received, and I got several compliments, all apparently genuine, which was nice. I have already mentioned that these folks are (some or a good portion of) the top intelligence researchers in the world, and from prior conferences I already knew, they can be a (very) tough crowd if something is not up to snuff. I wondered if I’d get laughed out of the room, or scorned, but that did not happen; so apparently, I did something right. Perhaps it was the fact that I actually handed out paper copies of most of the IQ tests I was describing. People like (tangible) freebies, and mine were the only ones given out at the conference.

My presentation, though first in the morning after Claire Lehmann’s address, was on the last day of the conference. About half the attendees had bailed by then. The younger folks often do this—fading after the first day, both in attention (you can see them zoned out, tired) and in simple attendance, arriving late or not showing up at all. That is par for the course. Something I already knew from prior conferences: the stalwarts, the true lions of the field, the ones who’ve put in decades of productive work (and, for example, written papers that have garnered thousands of citations)—they stick around.


Unlike most IQ tests, the tests I have created/adapted/reprinted are completely free for anyone to use, under a Creative Commons license. For more information, contact me, Leo Hesting.


Free IQ tests are very unusual in intelligence research, which operates according to the rule that “if a test gets out (is published), it becomes useless.” Anyone could then get hold of it, study it, memorize the answers, and score as high as—or even higher than—a “genius.” How I guard against that with my own tests, while still making them freely and publicly available, was one of the chief topics of my presentation.

Twin and Adoption Studies

Matt McGue is a highly regarded researcher from the University of Minnesota, which has a strong research program, famous in part for its studies of twins: twins raised together, twins separated at birth and adopted out, monozygotic and dizygotic twins—also studies of children adopted into families. The potential for teasing out the relative influences of genetics versus environment is huge here, and the Minnesota folks have very large data banks full of precise information, which they put to good use.


Matt McGue is a behavioral geneticist. He co-directs the Minnesota Center for Twin and Family Research at the University of Minnesota. He is a guest professor at the Department of Epidemiology at the University of Southern Denmark, where he studies aging.


Matt presented a number of findings; to me the most interesting one was not about twins but about adoptees. It turns out that if a child of normal/average intelligence, or even a little lower than normal, is adopted into a “high-achieving” family, the results are not what “environmentalists” would predict. (In intelligence research that term has a special meaning: persons who believe that a person’s environment defines/determines their intelligence.)

It’s true that, on average and across hundreds of adoptees (considering both their adoptive and birth environments and families), an adopted child’s IQ measures out slightly higher—perhaps 4 to 8 points or so—than if they’d been raised by their birth parents. This effect is well-known and well-documented; it’s a slight effect and it fades fast, in fact to zero by about age 30.

But that is old ground that doesn’t need re-plowing, and Matt presented something else. It turns out that the emotional well-being of these “normal” or “not-high-achieving” adoptees adopted into high-achieving households suffers. The effect sizes are small, but they are there. Evidently, being in a family where expectations are higher than you can reasonably achieve, given your own native capabilities, is stressful. It is hard to say what to make of this finding from an “adoption policy” point of view, but the finding that adoptees in high-achieving homes do worse psychologically is interesting—it gives one (or at least it gives me) pause.

Meritocracy

Damien Morris of King’s College London gave one of those presentations that had “a little bit to do with” the field of intelligence research but delved heavily into social policy. The chief topic was, should society be structured as a meritocracy, and if so, what kind of meritocracy?

Intelligence and Suicide

Then came an interesting presentation—given by Jonathan Fries, another of the University of Vienna researchers. His topic differed from the other Viennese presentations, having to do with intelligence and its relationship to suicidal ideation. He began by mentioning a claim by some philosopher—I don’t remember who—to the effect that it takes intelligence to even conceive of the possibility of suicide. Therefore, suicide—or in this case suicidal ideation—should be more common among more intelligent people. Yet in reality, smarter people are less likely to even contemplate suicide. Or in the language of the presentation, the frequency of suicidal ideation is lower.

I liked the presentation because it did what academics ought to do. That is, it didn’t just take the established (negative) correlation between IQ and rates of suicidal ideation at face value. Rather, the researcher attempted to determine: just how might this work? Or in academic parlance, what might the mediating mechanisms, factors, behaviors, and/or habits of mind be that make this happen? Asking that question resulted in a pair of answers, both kind of obvious once you learn about them, but that you might not have thought of on your own:

  1. Individuals of higher intelligence tend to have better health (this is already well-established), which results in fewer limitations not only on their physical life but also on their thinking and their prospects, and
  2. They tend to get out more and get greater physical activity.
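For readers curious what this kind of mediation analysis looks like mechanically, here is a toy sketch using the classic product-of-coefficients approach. The variable names, effect sizes, and data are all simulated assumptions of mine, not the presenter’s actual model or results.

```python
import numpy as np

# Toy mediation sketch (simulated data, not the presenter's actual model):
# does physical activity "carry" part of the IQ -> suicidal-ideation link?
# Indirect effect = a * b in the product-of-coefficients approach.
rng = np.random.default_rng(1)
n = 50_000

iq = rng.standard_normal(n)                      # standardized IQ
activity = 0.3 * iq + rng.standard_normal(n)     # path a: IQ -> activity
ideation = -0.2 * activity - 0.1 * iq + rng.standard_normal(n)  # paths b and c'

# Estimate path a (regress activity on iq), then path b (ideation on
# activity, controlling for iq).
a = np.polyfit(iq, activity, 1)[0]
X = np.column_stack([activity, iq, np.ones(n)])
b = np.linalg.lstsq(X, ideation, rcond=None)[0][0]

# The estimate should land near the true 0.3 * -0.2 = -0.06.
print(f"indirect effect a*b: {a * b:.3f}")
```

The point of the exercise: the “mediating mechanism” language from the talk boils down to asking how much of the total IQ effect flows through an intermediate variable like activity.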

I don’t need proof of those points, especially not the second one. I was never depressed nor anything near it, but most of us have ups and downs, better and worse moods. My dear old dad would from time to time tell me: come on, let’s get outside, you’ll feel better. Even during midwinter in snowy, cold northern Michigan, Dad was right. So I think the presenter’s analysis holds up.

I have mentioned that conferences are good in that you get people from different viewpoints, fields, and countries together. Often what is presented gets comments from folks who specialize in other fields and have different life experiences, attitudes, and conceptions. That happened this time with this presentation. Heckman, the economist, asked the presenter whether what was responsible for lower suicidal ideation was not g (or the intermediating factors listed above) but rather “the things that g can buy”—using “better health care” and “better life” as examples. His own “slant” (or “bias” or “outlook”) as an economist was clear in his question. But his comment about health care was swiftly met with contrary attestations, first by the Nordic and Canadian attendees, who informed Heckman that in their countries the level of care from the public (government) sector is superior to what you get from the private sector. Heckman resisted this, using the common objection (which I have heard before) that Canadian residents who want better health care “come to the United States.” At that point Claire Lehmann, an Australian, told Heckman and the room that in Australia, public-sector health care is simply superior. Of course, that doesn’t necessarily settle anything, but such discussion and information is, well, informative. And interesting.

Personality and Intelligence

Then came a 1-hour presentation by Bill Revelle, whom I have mentioned above. And here I must, sadly, fail to do even a halfway-decent job of recounting the wealth of material Bill presented. For one thing, I just can’t take notes that fast; for another, I was running out of steam toward the end of a fairly long conference. But another, bigger reason was Bill’s own excellence. He presented some 60+ slides, but during the presentation did something that I and a few others also did—showed us where we could download everything he was presenting: his whole “slide deck”. If you want it—and/or nearly everything else Bill has done over a career spanning decades—you can find (at least a lot of) it at https://personality-project.org/sapa (you may need to hunt around a bit).

Bill described his outlook (he likes/tries to be a generalist), said that he’s “a personality psychologist who studies intelligence,” and mentioned some of his efforts, findings, and products—a few of which I have already mentioned:

  • He found that it is possible to manipulate the scores test-takers get on GREs (Graduate Record Exams) through “arousal manipulations” such as varying the time of day or administering caffeine. But before you go off advising anyone you know who’s trying to get into grad school to drink a few cups of coffee first, it turns out that the (significant) effect of caffeine varies depending on the impulsivity of the test-taker. GRE coaches beware!
  • He developed the ICAR and the psych package, both of which I’ve mentioned.
  • He had a lot of help.
  • Serving in the Peace Corps in Asia is different from serving in Africa or Latin America. In Asia you learn patience; volunteers who’d served in Central America came back radicalized ’cause they’d seen villages get wiped out; those who’d served in Africa came back with a sense of humor ’cause that’s what Africans have themselves—how else could you survive?
  • Serving in Malaysia, he taught some children whose parents had been headhunters. He found ways to get these tiny villagers educated well enough to pass the tests that got them into top schools.
  • This caused him to doubt the idea of “cultural differences in ability,” since they were able to “move” scores by 2.5 standard deviations.
  • The Synthetic Aperture Personality Assessment (SAPA) project. It yields amazingly excellent data and is too long to describe here.

Several of the fine details of some of Bill’s assessments (meaning, the exact types of questions) will be very useful to me as I further develop my own suite of tests. Yet another win for me, a great benefit of attending the conference.

Gifted/Talented Students

Al Mansor Helal from the University of Arkansas then gave a second presentation. He explained the results of a study evaluating the performance of students enrolled in “gifted/talented” programs in that state, focusing on whether these programs were associated with eventually enrolling in “selective” universities. Basically, the result I took away was that there was some association, but a better predictor—by quite a bit—was whether the student had taken at least two AP courses during high school.

The SAT® and the College Board

The penultimate presentation of the day—and of the whole conference—was excellent. The presenter was Paul Westrick, senior researcher at The College Board, which you may recognize as the company that provides the SAT®—a test widely used by colleges in determining whom they admit. Both test and company are frequently reviled, and in fact a few universities, a few years ago, announced that they would no longer use the SAT. After a very short while they changed their tune and policy both, as the SAT, despite all the disparagement and claims of bias, remains the single best predictor of how well a high-schooler will do in college. (Combining the SAT with high school grades works even better.)


The College Board is a not-for-profit membership organization that publishes the SAT® (originally called the Scholastic Aptitude Test) and the Advanced Placement® (AP®) tests, among other tests used for academic placement.


With about 2 million students taking the SAT each year, the College Board gets massive sets of data/results, which enables them to do some pretty fine analysis. Paul’s presentation this year regarded the practice of re-taking the SAT in hopes of improving one’s score. Hint: it rarely goes up very much, and in fact at the highest levels (around 760), you’re more likely to do worse on a re-take than you did the first time. How the colleges interpret all of this is up to them. There were some other interesting findings, but basically, in answer to the question “Does it really pay to re-take the SAT, maybe even 4 or more times?” the answer appears to be “Yes, it does pay—it pays the College Board,” which of course charges for each test taken. Otherwise, the benefit of re-taking the SAT appears to be both minimal and provisional. That is my takeaway: please don’t attribute it to Paul! I personally agree with the line that if you want to do well on the SAT, the way to do it is to master the material taught in high school.
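Why would a 760 scorer be more likely to drop on a retake? Regression toward the mean, plus the 800-point ceiling. The following simulation is my own illustrative sketch (the skill and error numbers are invented assumptions, not College Board data):

```python
import numpy as np

# Illustrative simulation (toy numbers of my own, not College Board data):
# model each SAT-section sitting as true skill plus measurement noise,
# capped at the 800 ceiling, and see what happens to ~760 scorers.
rng = np.random.default_rng(2)
n = 200_000

true_skill = rng.normal(500, 100, n)   # hypothetical "true" section-score level
noise_sd = 30                          # hypothetical per-sitting error
take1 = np.clip(true_skill + rng.normal(0, noise_sd, n), 200, 800)
take2 = np.clip(true_skill + rng.normal(0, noise_sd, n), 200, 800)

# Students whose first try landed near 760 mostly got there with some
# lucky noise on top of their true level, so the retake tends to be lower.
high = (take1 >= 750) & (take1 < 770)
dropped = np.mean(take2[high] < take1[high])
print(f"share of ~760 scorers who score lower on retake: {dropped:.0%}")
```

In this toy model well over half the near-760 group scores lower the second time, which matches the direction of the finding Paul reported, though of course not his actual numbers.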

One tiny takeaway, salient and interesting to me, came when Paul said that the SAT was a “standardized admission test.” That was new to me. Back in the old days, SAT was the abbreviation for Scholastic Aptitude Test. Then I fell out of the field; the last I had heard, the College Board was trying to get people used to the idea that “SAT” didn’t stand for anything: “Just call it the SAT.” So the new “backronym” is interesting to me. Also, pretty accurate.

Debunking Myths About Intelligence

And at the very end of the day came one of the field’s most productive and respected presenters, Russ Warne: distinguished researcher, author of the popular book In the Know: Debunking 35 Myths About Human Intelligence, and founder of a new tech startup (RIOT IQ) that will be offering an IQ test based on the “freemium” model.


Russell T. Warne is an educational psychologist and a member of the editorial boards of several scholarly journals, including Intelligence. He is the chief scientist of RIOT IQ, whose Reasoning and Intelligence Online Test (RIOT) launched in 2025. The options range from a quick, free IQ test to more robust, accurate, paid versions.


In the freemium model (common in the tech world), a company offers an entry-level product free, then charges for other products and services. I mentioned above that there is already a free online IQ test, the ICAR, so Russ is coming up with something new. Russ came to the conference to present some anomalous findings he’d run across while “norming” his new test; he wanted his professional colleagues’ opinions.

The finding isn’t easy to recount in print without all the charts and graphs, so I’ll summarize. There is a subset of people who answer vocabulary questions—that is, they define vocabulary terms—rapidly and correctly but don’t score as high on IQ tests as other test-takers. Since mastery of vocabulary is the single best predictor of IQ (yes, better than math and logic puzzles), this was an interesting finding. I suggested aloud to the room that Russ had found a test for shallowness. The room, and Russ, agreed that that was a possibility.

Heckman (the Nobel Prize-winning economist) threw out some ideas and observations. There were other comments, suggestions, and observations: excellent ones. Coming on the same day as Claire’s talk—in which she focused heavily on the postmodernists and the decline of literature and the humanities due to those fields’ embrace of postmodern theory—I also wonder whether those folks (thinking Foucault, Derrida, Judith Butler) who are very facile, glib bullshitters would fit into this category of Russ’s.

With that, the conference ended. As the days had passed, attrition had set in, and attendance dwindled quite a bit. The “core stalwarts” made it through to the very end. So did people like me, hungry for every scrap of knowledge we can get.

Personal observations and summary

Some overall observations/opinions/thoughts that weren’t part of any of the above presentations:

  • Attendees often focus more on the liveliness and entertainment value of the presenter than on the material being presented. I got several compliments along the lines of “You’re a very good presenter” but little about the material. By the time you’ve sat through a couple dozen 20-minute presentations, I figure, your brain turns to mush and you’re only lightly engaging with the material. One guy said to me something like “Good presentation. Oh, and thanks for the packet of tests.” (Unlike any other presenter, I had prepared a packet of actual printed IQ tests and gave them out.) So the guy liked the presentation, and he liked the nice packet of stuff. As for the substance, well, no mention of that.
  • Universities suck at least as much as corporate America, and I’m quite glad I did not go into academia. Back in the day—meaning 40 years ago or so—I asked friends who were themselves in academia for their advice. Basically, they all told me that it’s not all it’s cracked up to be. I saw evidence of this. For example, one storied academic told me that university administrators’ attitude toward professors is “What have you done for me lately?” He recounted a story—apparently well-known in academia—of a professor who won the Nobel Prize. In the days and weeks before the award announcement, his department had been cutting back on his resources and was about to take away his lab. He hadn’t produced anything lately, he wasn’t bringing in big grants, and he wasn’t generating a lot of “buzz.” Their answer to the question “What have you done for me lately?” was “not much” or “not enough.” Of course, once his Nobel Prize was announced, they reversed course and stopped taking away his resources.
  • Speaking of academia and academicians, there was sotto voce but extensive “wailing about fate”—meaning the various cuts and hits universities are taking from the Trump administration. There was plenty of lamenting about particular valuable programs (e.g., some weather/climate studies) that appear likely to be discontinued, which will result in loss of data. But one of the most respected guys present mentioned that the universities had brought it on themselves; plus, he wasn’t sure what could have been done to prevent it. I asked him just what he meant by that—how had they brought it on? He replied, “DEI.” He’s a complete liberal himself—he lives and teaches in California. But he recognizes the culprit.
  • Northwestern has a nice campus.
  • There are lots of rabbits (hares—not domestic/feral) on Northwestern’s campus, and I guess in Evanston generally.
  • There’s no free on-street parking anywhere in Evanston. Fortunately, Northwestern offers an inexpensive parking garage.
  • The “Big 5” personality traits—besides being very Western-oriented—are also kind of facile. They basically describe “people you’d like to invite (or not) to a party.” Some of this I already knew; that latter part I heard from Russ Warne. But on my own I discovered that the categorization scheme actually fails even at doing that. I have to skip details here for reasons of tact, but basically one attendee, variously described as “screwy,” “a nut,” and “a disgusting pig,” would do just fine on the Big 5. As the Big 5 purports to paint a fairly complete picture of a person’s personality (a related composite measure is called the General Factor of Personality, or GFP), this does seem to demonstrate its flaws and limits.
  • The folks who attend, make up, and power ISIR are friendly, welcoming people.
  • One guy, whom I’ve now met and listened to at the last three conferences, and who is clearly the very smartest guy in the room—everyone there agrees about this—is a Christian. I would not have guessed. Talking to a guy—or rather listening to him, as he is quite voluble—with an IQ around 180, and hearing his take on Christianity, is quite an experience.

I have never before tried to summarize the results of an entire conference. When I first started, I figured I’d end up with a page or two of material. As you can see, that was not the case.

I already knew that academic conferences were interesting; you hear from smart, generally interesting people—often from all around the world—who are excited about their work. A small but significant smattering of attendees are simply lay people who are there largely just ’cause they like to go. I myself fell into that category at my first two conferences, and to a great extent I still belong to that group of “groupies,” “hangers-on,” and amateurs—I will never have, for example, the technical or professional qualifications of the other presenters. However, those very folks are overwhelmingly encouraging. It’d be nuts, of course, to pretend to have any abilities you don’t—to lie, in short—’cause your deception would be detected in minutes if not moments. But if you acknowledge yourself as “just a learner” and hold yourself out to be just that, you are welcomed.

It certainly isn’t everyone’s cup of tea: spending hours upon hours, day after day, sitting there watching and listening as presenters spew out more-or-less incomprehensible facts, statistics, findings, and verbiage, most of which flies right over your head and past you with the velocity of hornets on methamphetamine in a hurricane. On the other hand, information such as this can take a decade or two to make it into college textbooks, and even longer to make it into the popular press and public consciousness. If you want the straight dope, it can be very valuable to go straight to the source. And even fun, if you’ve got the strength for it.

Any conference—like any experience—has its ups and downs, its highlights and lowlights. This one was valuable for me personally and maybe even interesting to however many readers who have read thus far. I salute you!
