I started this experiment skeptically. My background is in research-intensive writing, where getting a fact wrong has professional consequences. After three months and five tools, my relationship with AI research assistance is more nuanced than either "this is transformative" or "this is dangerous".

The Five Tools I Tested

Perplexity AI Pro: general background research.
Claude Pro (with document uploads): processing specific research materials.
ChatGPT Plus (with Browse): current information.
Elicit: academic paper search and synthesis.
Connected Papers: literature mapping.

The Result That Surprised Me Most: Perplexity AI

I expected Perplexity to be useful but limited. It was significantly better than expected at building initial background understanding of unfamiliar topics. For a research piece on a field I knew nothing about, Perplexity produced a sourced, structured overview in about eight minutes that gave me the conceptual foundation to ask informed questions of domain experts. Time saved compared to my usual manual process: roughly three hours down to twenty minutes.

The Problem That Nearly Derailed My Work

I asked ChatGPT to help me find academic sources supporting a specific claim. It gave me four citations that looked completely real, and I almost included them in a draft sent to an editor. A random spot-check revealed that one of the four was fabricated: the journal was real, the author existed, but the specific paper did not. This experience crystallised my approach. AI tools are invaluable for understanding and synthesis. They are unreliable for specific citation generation. Every citation from an AI tool requires independent verification before use in any professional context.
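The verification step does not have to be entirely manual. Below is a minimal sketch of one way to automate part of a citation spot-check: fetch candidate titles for a claimed paper from a bibliographic database (for example, Crossref's public works-search endpoint) and fuzzy-match them against the title the AI gave you. The function name, threshold, and workflow here are my own illustration, not a tool any of the products above provides.

```python
from difflib import SequenceMatcher


def title_matches(claimed: str, candidates: list[str], threshold: float = 0.9) -> bool:
    """Return True if any candidate title is a near-exact match for the claimed title.

    `candidates` would typically be titles returned by a bibliographic search
    (e.g. Crossref) for the claimed paper. A fabricated citation usually has
    no near-exact match in any real database.
    """
    # Normalise whitespace and case before comparing.
    claimed_norm = " ".join(claimed.lower().split())
    for title in candidates:
        title_norm = " ".join(title.lower().split())
        if SequenceMatcher(None, claimed_norm, title_norm).ratio() >= threshold:
            return True
    return False
```

A match only confirms that a paper with that title exists; you still need to read it to confirm it says what the AI claims it says.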

| Tool | Best Use | Fact Accuracy | Citation Safety | Time Savings |
|---|---|---|---|---|
| Perplexity AI Pro | Background research, sourced overviews | High | High (live sourcing) | Very High |
| Claude + documents | Processing specific research materials | Very High | High (stays in documents) | Very High |
| ChatGPT + Browse | Current developments, news research | Moderate | Low (verify all citations) | High |
| Elicit | Academic paper search and synthesis | High | High (real papers only) | High |
| Connected Papers | Literature mapping, related works | Very High | Very High (real papers only) | Moderate |
Which AI research tool is safest for academic use?
Elicit and Connected Papers are safest because they work exclusively with real existing academic papers and cannot fabricate citations. Perplexity is also relatively safe because it cites live web sources. ChatGPT and Claude can generate plausible-sounding but fictitious citations, so always verify any specific citation they produce independently.
How should I use AI tools for research without getting into trouble?
Use AI for: building background understanding, identifying what questions to ask, synthesising concepts, and processing documents you have already gathered. Verify independently: all specific factual claims, all citations, all statistics, all expert attributions. Never include AI-generated citations in published work without checking that the cited work actually exists and says what the AI claims it says.