I was having this discussion with a friend: “Can AI simulate humans?”

Key assumptions:

To simulate something you must understand it.

AI is based on logic and math; AI uses patterns.

Argument timeline:

So far, we’ve used computational power beyond our own to help us understand things faster (think calculators and computers).

Thus only computational power limits our capacity to understand ourselves.

But computational power is purely logical; thus, humans must be logical if they are to be understood by ever-advancing, logical AI.

To which she said, “We are so illogical we end up being logical.”

Expanding on “We are so illogical we end up being logical”:

Maybe human logic is so complex that, at first glance, it seems illogical. To understand it, machines would need to find complex algorithms and patterns in order to simulate what we perceive as illogic.


This would only be true if human thought were finite, so we arrive at a common question in literature: “Is there truly a unique thought?” If the answer is yes, it means we have infinite thoughts, and thus we are illogical. If the answer is no, then we have finite thoughts, and thus we are logical.

From which we arrived at a thought experiment (I think):

If an AI were given the freedom to use other computers to help it compute, and it could come to understand itself, then the only barrier to understanding ourselves is computing power. Thus, we are logical.


If it couldn’t understand itself, then we are illogical.