Large language models (LLMs) have now achieved many of the longstanding goals of the quest for generalist AI. While LLMs remain imperfect (though rapidly improving) in areas like factual grounding, planning, reasoning, safety, memory, and consistency, they do understand concepts, are capable of insight and originality, can solve problems, and exhibit many faculties we have historically defended as uniquely human, such as humor, creativity, and theory of mind. At this point, human responses to the emergence of AI seem to tell us more about our own psychology, hopes, and fears than about AI itself. Taking these new AI capacities seriously, however, and noticing that they all emerge purely from sequence modeling, should prompt us to reassess what our own cerebral cortex is doing, and to ask whether we are finally learning what intelligence, machine or biological, actually is.