
A recent public discussion about artificial intelligence and genealogy left me thinking, analytically rather than emotionally, about where our field is right now.
What struck me most was not what was said, but what wasn’t.
Several audience questions centered on whether AI should be used at all, or on when AI might someday be capable of returning source citations. Those questions are revealing. They suggest a community still waiting for permission, still assuming the capability lies in the future, and still being told to fear a tool that many genealogists are already using responsibly and effectively.
More concerning, however, were moments when concrete ethical scenarios were raised and no clear ethical line was drawn. When real examples involve other people’s data, living individuals, or derivative work, “it depends” is not guidance. It’s abdication.
Genealogy does not function well on ambiguity. Our work affects real families, real identities, and real relationships. Ethical frameworks only matter if they are applied when it is uncomfortable to do so.
Silence is data.
So are the questions we choose to answer and the ones we avoid.
What this discussion confirmed for me is something I have been observing for months: much of the leadership around AI in genealogy is stalled, not because the technology is unknowable, but because decision-making feels risky. Talking is safer than teaching. Warnings are safer than methods.
The work, however, is already happening: quietly, responsibly, and outside the spotlight.
And that’s where I’ll continue to focus my energy.
