I wouldn't trust a chatbot to be accurate, and in fact they may be dangerous or harmful in this context.
I say this as someone who does find some uses for them. personally I like using them to help write emails or proofread my grammar. but ultimately what they're best at is just predicting which word sounds best after the last, building sentences that way. any "intelligence" they appear to have is just anthropomorphization of a fancy text predictor.
the actual information they give is at no point guaranteed to be correct. their accuracy depends on the specific model, of course, but all of them are capable of "hallucinating" facts or making things up if that's what "sounds best". there are already plenty of well-documented cases of chatbots saying insane things to people because of this.
if you want to research methods, you're much better off doing it yourself, using search engines or resources found through SS.