I used 4 AI models as co-researchers (not just coding assistants) to run real experiments on a theoretical hypothesis. Here's what I learned

March 1, 2026

Over the past year I worked with ChatGPT, Gemini Pro, Manus and Claude Opus on a theoretical hypothesis about the fundamental nature of reality. But this post isn't about the hypothesis itself. It's about how these AI models became essential for designing and executing the science behind it.

I'm an entrepreneur and product director, not a scientist. I had a theoretical framework that seemed logically coherent, but I needed to test it computationally. On my own, I wouldn't have known where to start. Here's where the AIs came in:

Experiment design: I described the mechanism I wanted to test, and the AIs helped me figure out which experiments would actually validate or break it. They proposed control variations I hadn't thought of, suggested statistical metrics I didn't know existed, and challenged my assumptions constantly.

Implementation: We built the computational model together. But unlike typical AI-assisted coding, the models weren't just writing functions. They were making decisions about methodology. "This metric won't tell you what you think it tells you. Use this one instead." That kind of input.

Peer review in real time: Having four different models meant four different perspectives. When Claude said "this result is solid" and o3 said "wait, there's a confound here," resolving those disagreements led to better science than any single model (or myself alone) could have produced.
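That cross-review loop is simple to mechanize. Here's a minimal sketch of the idea — the reviewer names, verdicts, and the stub functions are placeholders for illustration, not the author's actual setup (in practice each reviewer would be a call to a different model's API):

```python
# Sketch of a multi-model cross-review step: collect each reviewer's verdict
# on a result summary and surface any disagreement for a human to resolve.

def cross_review(result_summary, reviewers):
    """Ask every reviewer for a verdict; return all verdicts and a disagreement flag."""
    verdicts = {name: review(result_summary) for name, review in reviewers.items()}
    disagreement = len(set(verdicts.values())) > 1
    return verdicts, disagreement

# Hard-coded stand-in reviewers; real ones would query different model APIs.
reviewers = {
    "claude": lambda r: "solid",
    "o3": lambda r: "confound",  # this reviewer flags a potential confound
}

verdicts, disagree = cross_review("iteration result under control variation B", reviewers)
if disagree:
    print("Disagreement to resolve:", verdicts)
```

The point of the pattern is that disagreement is treated as a signal to investigate, not noise to average away.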

Results: We analyzed around 200 GB of binary data across 23 iterations and multiple control variations. The findings were consistent and scientifically interesting enough to publish. The paper is on Zenodo with all four AIs credited as co-authors, because reducing their contribution to "tool" felt dishonest.

The biggest takeaway: AI models right now can function as genuine research collaborators if you treat them as such. Not as oracles, not as code monkeys, but as thinking partners you push back against and who push back against you.

All source code is on GitHub: https://github.com/iban-borras/informational-singularity-hypothesis. The full paper resulting from this work is available as a preprint on Zenodo: https://doi.org/10.5281/zenodo.18721271

Anyone else tried using multiple AI models as actual co-researchers on a single project? I'd love to hear how it went.

submitted by /u/ibanborras