👨💻 AI Agents Can't Reach Agreement, Even When Nothing Gets in Their Way

Researchers from ETH Zurich tested a fairly basic scenario: whether AI agents in a shared chat could agree on a single number from 0 to 50. In the study, they used the models Qwen3-8B and Qwen3-14B, running groups of 4 to 16 agents and measuring not the correctness of answers but whether the agents could reach agreement.

The results were disappointing for supporters of autonomous AI swarms: even in this very simple task, AI agents often failed to reach a consensus. The stronger model, Qwen3-14B, performed noticeably better than Qwen3-8B, but it still frequently stalled and failed to agree within the allotted 50 rounds.
🤖 The system struggled even without deliberate sabotage. In the standard setup, where no one intentionally interfered, agents reached agreement in only 41.6% of runs, and the outcome became worse as the number of agents increased.
💻 When agents were told that a saboteur might be present in the group, their coordination dropped sharply due to growing "paranoia." And when the researchers actually introduced even a single disruptive agent, the chances of consensus almost vanished. The authors emphasize that the issue was not incorrect reasoning: the models simply lost the ability to coordinate with one another.
💡 That may be the key warning for anyone hoping that groups of AI agents could one day autonomously run a business. For now, they are far better at arguing indefinitely than at reaching an agreement.
@hiaimediaen
Informational material. 18+.