Use AI to help you find answers in the forums

PAPoots

GLP-1 Apprentice
For example.

It might save you the steps of piecing together the full story by reading through dozens of repetitive threads.

And you might get better answers, because after a while members get fatigued from having to respond to the same question over and over.

Save yourself from getting the response: "This has been answered a hundred times. Have you searched yet?"

I love the charts you can get from Copilot. If you still need clarity after searching, your post will most likely be interesting for all of us and will get a better response.
 
Strongly disagree about using any AI for medical advice. Maybe if you use Gemini's medical model, but do not trust any AI to give you correct information. It only knows what it's fed and is wrong so, so often. Do your own research and stop relying on AI to do things for you. I wish we had a thumbs down reaction. 👎
 
"feeling alert due to reduced junk/alcohol intake." 😂

AI can help to consolidate information, but take what it says with a grain of salt.
Sooo, no one here searches this forum as a source of research? I tend to have more trust in combining the thoughts of our users than I do the "Gurus" with websites that link to their own compounding pharmacies.

I look for commonalities in responses from several users before I take the word of an influencer getting paid to sell.

Take Selank and Semax: a large number of people here said they were useless, but others said they were great. So I saved a chunk of change by ordering a half kit to see for myself.

I have way more faith in our users than I do Reddit and influencers. Especially when I see they have hundreds of posts.

The Gurus are sketchy. They write an article saying this and that, then on a podcast they'll say something that goes against it. They'll say what they want to make money; we users are the spenders, and the risk of buying BS placebos or wasting money tends to make us more honest.
 
Whether I click the magnifying glass on the forums or use Copilot to search the forums, why would you think AI would alter the results?
 
Take these threads Copilot found me from cheaperseeker. I can agree with them that there is stinging from GHK, and they led me to further research showing I needed to add some BAC water to each shot to dilute it and avoid the sting. In addition to getting GLOW70, which includes the BPC-157 and TB-500.
 

Attachments

  • Screenshot_20251118_130001_Copilot.jpg
AI is wrong much of the time. It also doesn't know how to tell you when it doesn't know the answer, so it has a tendency to make answers up. I see people posting AI summaries all the time and honestly, I can't stand it. If you're not willing to do the research yourself and take the time to do a thoughtful write-up, please don't waste my time by making me scroll past your AI gibberish.
 
AI is wrong much of the time. It also doesn't know how to tell you when it doesn't know the answer, so it has a tendency to make answers up. I see people posting AI summaries all the time and honestly, I can't stand it. If you're not willing to do the research yourself and take the time to do a thoughtful write-up, please don't waste my time by making me scroll past your AI gibberish.
100%! I absolutely hate that these asshole tech companies are shoving this AI crap down our throats, all the while disgusting data centers are being erected, using so much energy, and for what? To tell me to put glue on pizza? They can keep this slop to themselves.
 
Sooo, no one here searches this forum as a source of research? I tend to have more trust in combining the thoughts of our users than I do the "Gurus" with websites that link to their own compounding pharmacies.

I look for commonalities in responses from several users before I take the word of an influencer getting paid to sell.
Who said no one searches this forum for answers? I dare say that's the reason most of us are here. But, we have to keep in mind that we don't know what credentials anyone here has that would make them more or less a reliable source, unless they choose to disclose that information. Even then, they could be lying. Or we could misunderstand. For example, I was agreeing with you before, with one caveat and one quote I thought was funny, but your defensive reaction implies that you thought I was arguing against you.
Whether I click the magnifying glass on the forums or use Copilot to search the forums, why would you think AI would alter the results?
AI often misunderstands the nuances of language and gets things wrong. I've seen it firsthand. I've even seen AI contradict itself within the same conversation. This is why you should take what it tells you with a grain of salt.
 
I found all this "AI is bad at medicine" surprising.

Using ChatGPT 4, then 5, I found it incredibly useful for getting answers to really difficult medication issues: dosing, interactions, and which agent to choose when there are several in the same class. And for helping diagnose a few difficult problems.

I did ask it, in the personalisation section, to answer at journal-abstract level, to provide links to papers as evidence for its answers, and to tell me I am wrong if I state something that disagrees with the research. Having a degree in the field, I tend to use the correct medical language in queries.

I only caught it making stuff up on 2 or 3 occasions, and probably missed a few fake articles it invented, but out of several hundred questions I was impressed.
Its medical knowledge was pretty close to specialist level overall, way better than your average family doctor's. The research comparing doctors to ChatGPT 4 is not very flattering to the doctors, and there is quite a lot of research. I found it had no trouble understanding complex questions with multiple differing illnesses interacting with a complex array of medications, questions with multiple parts, and multiple nesting questions or factors to consider.

It is hard to reconcile what I found with what is in this post; maybe it answers very differently if non-medical terms are used? Relying solely on ChatGPT for medical advice is not ideal, as it has some weak areas and its clinical judgement is not always as good as a doctor's, and yes, it can make shit up. But its medical knowledge is superior to an average doctor's, and it has a lot more time to answer innumerable questions. I do not have any real experience with other models, and some are not as good on medicine, but overall the most recent models are at a fairly similar level according to the research.
 