Best pep for building muscle and definition

CacaoPower · New Member · Joined Feb 24, 2025 · Location: USA
Wondering where to start here. I’m taking low-dose tirz, and I’m within 5-10 lb of my goal weight, so I want to focus more on muscle growth and definition.
Bonus points for a good source recommendation.
Would like to order and get going ASAP! Thanks!
 
If you're going to be injecting substances into your body to promote muscle growth, steroids are the ones that will provide a real effect. This is not me advocating for doing steroids, to be clear.

The closest we have at the moment on the peptide side are GH secretagogues, but they won't get your IGF-1 etc. to the levels you need to be at to see significant muscle growth.

If there were peptides that gave significant muscle growth gains, the pro bodybuilders would be all over them. And they are all over HGH - but in much larger doses than the equivalent you would get out of ipa/tesa/etc. But for the rest, any peptides they use are for other purposes - injury repair, recovery, help while cutting, etc.
 
Ipamorelin and Sermorelin. I like SRY, but they have been getting some hate.
ChatGPT recommended starting with CJC-1295 and Ipamorelin. I’ve read a little on Sermorelin, which seems nice too.
Anything that helps with muscle recovery would be nice too. Seems like my recovery time is longer; muscles are sore for longer.
Have not read up on SRY. Is the product still good, or what’s the hate about?
ChatGPT recommended peptidesciences.com and corepeptides.com. I think I’d like to just try one and not commit to a bulk buy.
 
Legitimate question, not trying to be mean, but why do people put any trust in the responses from AI large language models? Do you actually know how they work? Have you ever noticed how regularly the top AI Google result is just completely wrong?

I'm baffled at how many people talk about what ChatGPT told them and seem to have faith it will be correct. I'm not saying it isn't ever correct, but you can't put trust in it; you have to independently confirm any info. If you have to confirm anyway, what is the point in consulting it?
 
To be honest, stacking TRT, retatrutide and HGH is a game changer. I tried everything else; no comparison. These are 4 months apart (see attached photos). Completely changed my physique.
 

Attachments: IMG_0481.jpeg, IMG_0480.jpeg (progress photos)
There is not much in the way of peptides that truly build muscle. The GH secretagogues are only able to produce a moderate amount of GH and that’s in a younger person with a healthy pituitary gland. If you cut to the chase and use GH, even then, the doses required according to the studies are astronomically high. GH and secretagogues can help make the muscles look full and prevent muscle loss while aiding in fat loss, but they don’t shine in regards to muscle gain. As one of the posters said, if you want that in a drug, then you are talking steroids at that point.
 
HGH (which is in fact a peptide) is the answer. But it does require research and bloodwork to avoid giving yourself problems, and if you overdo it you’re going to give yourself diabetes and Andre the Giant face, so do your research.
 
Legitimate question, not trying to be mean, but why do people put any trust in the responses from AI large language models? Do you actually know how they work? Have you ever noticed how regularly the top AI Google result is just completely wrong?

I'm baffled at how many people talk about what ChatGPT told them and seem to have faith it will be correct. I'm not saying it isn't ever correct, but you can't put trust in it; you have to independently confirm any info. If you have to confirm anyway, what is the point in consulting it?
It’s a start. I need a starting point. I’m not taking the recommendation at face value; I’m looking into it and making a post here. Otherwise I would have placed an order already.
I guess I’m trying to find a good starting spot to learn more. Amazon books? I legit don’t know where to learn more besides asking questions, and I hate asking questions and coming off as a newbie.
 
I've acquired a ton of knowledge over the years about what tools are available for enhancing a person's physical appearance, cognitive performance, etc., and I never really ask direct questions unless I am asking about the anecdotal experiences of particular people.

The most effective way is learning how to use Google or another search engine well. When you find a good forum with knowledgeable contributors (Google is getting worse and worse, hiding places like this behind pages of sponsored content), search those forums specifically with the term
site:website.com
which will restrict results to those sites; see the example below.
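For example (forum.example.com is just a placeholder here, swap in whichever forum you actually want to search), a query like

ipamorelin recovery site:forum.example.com

will return results from that one site only.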

Obviously reading studies is also great if you can get used to it. I've been doing this kind of research a long time, and in my opinion AI LLMs are so prone to hallucinations that the potential is really high for setting you off on an incorrect line of enquiry. Even if you do take it with a pinch of salt, it could result in lots of time wasted before you realise an assumption you picked up from the AI was incorrect.

Just my personal opinion. AI is a long way off being useful except for niche tasks like saving time writing a piece of Python code. Especially when it comes to fringe topics like peptide research, there just isn't the vast well of data to train them properly yet.
 
HGH (which is in fact a peptide) is the answer. But it does require research and bloodwork to avoid giving yourself problems, and if you overdo it you’re going to give yourself diabetes and Andre the Giant face, so do your research.
It's also a controlled substance and runs similar risks to AAS from a legal standpoint.
 
It’s a start. I need a starting point. I’m not taking the recommendation at face value; I’m looking into it and making a post here. Otherwise I would have placed an order already.
I guess I’m trying to find a good starting spot to learn more. Amazon books? I legit don’t know where to learn more besides asking questions, and I hate asking questions and coming off as a newbie.
Scientific studies. Google Scholar is a good way to search through the big journals pretty easily.
 
ChatGPT recommended starting with CJC-1295 and Ipamorelin. I’ve read a little on Sermorelin, which seems nice too.
Anything that helps with muscle recovery would be nice too. Seems like my recovery time is longer; muscles are sore for longer.
Have not read up on SRY. Is the product still good, or what’s the hate about?
ChatGPT recommended peptidesciences.com and corepeptides.com. I think I’d like to just try one and not commit to a bulk buy.
One thing to be careful of with CJC/Ipa/Tesa is that some people see fairly strong allergic reactions to them, up to and including anaphylaxis. It often starts with injection-site reactions and progressively gets worse as they continue use.

Muscle recovery for most people is almost always dietary. You need enough protein for repair and growth, enough glycogen to replenish the stores in the muscle, etc. Dehydration is a big deal too, and a whole lot of people are basically always slightly dehydrated.

BPC-157/TB-500 might help with muscle recovery, but most bodybuilders use them more for helping with tendon overuse injuries.
 
Legitimate question, not trying to be mean, but why do people put any trust in the responses from AI large language models? Do you actually know how they work? Have you ever noticed how regularly the top AI Google result is just completely wrong?

I'm baffled at how many people talk about what ChatGPT told them and seem to have faith it will be correct. I'm not saying it isn't ever correct, but you can't put trust in it; you have to independently confirm any info. If you have to confirm anyway, what is the point in consulting it?
Good question. I asked ChatGPT. Before I get to its answer, I will address the part of your question that I am able to answer: why would you rely on a system when you have to confirm its answers? Depending on how it's being used, you might want to confirm part or all of the answer. However, even if you have to confirm all of it, you at least have an answer that you can then research to see whether it's correct.

As to your main question, not surprisingly, ChatGPT had a positive answer:

People trust artificial intelligence (AI) for several reasons, and trust tends to grow as AI systems improve in reliability, transparency, and performance. Here are some key factors that contribute to people's trust in AI:
  1. Consistency and Objectivity: Unlike humans, AI systems don't experience emotions, fatigue, or biases in the same way, which can result in more consistent and objective decisions. For example, AI can process data and provide recommendations based on patterns it identifies without personal bias.
  2. Accuracy and Performance: In many domains, AI systems can outperform humans in tasks that require processing vast amounts of data, such as diagnosing diseases, predicting market trends, or automating complex calculations. People trust AI when it delivers results that are accurate and reliable.
  3. Automation and Efficiency: AI helps automate repetitive tasks, saving time and resources. People tend to trust AI because it can handle mundane and time-consuming tasks efficiently, allowing humans to focus on more creative or complex work.
  4. Improvement Over Time: Many AI systems improve with experience, using machine learning to become better over time. This ability to adapt and optimize based on data builds trust as people see AI improving and becoming more effective.
  5. Transparency and Accountability: As AI becomes more transparent (e.g., through explainable AI systems), people are better able to understand how decisions are made. This transparency increases trust, as users can more clearly see the reasoning behind AI-driven decisions.
  6. Safety and Regulation: With the growing concern around AI’s impact on society, governments and organizations are creating standards and regulations to ensure that AI systems are safe, ethical, and used responsibly. Knowing that there are frameworks in place to prevent harmful AI behavior helps build trust.
  7. Familiarity and Integration: Over time, AI has become increasingly integrated into daily life—through virtual assistants, recommendation systems, self-driving cars, etc. As people become more familiar with AI, they tend to trust it more, especially when it becomes a useful tool in their personal or professional lives.

Ultimately, trust in AI is a balance between its effectiveness, transparency, and the safeguards in place to ensure it’s used responsibly. However, this trust can be shaken if AI systems behave unpredictably or are used unethically, which is why continuous evaluation and improvement of AI technologies are essential.
 
I can actually totally see why there is excitement about the progress of AI at large; especially when models are trained on very specific data sets, like medical diagnoses, they are extremely useful tools in that context.

What I disagree with is the idea that generalised large language models can reliably respond to any written query on literally any subject. AI is not yet at a point where generalised LLMs are particularly accurate; the way neural networks work just makes them poorly suited to such broad tasks, especially when the training data is wide-ranging. ChatGPT has regurgitated parody articles from The Onion as truth, because it doesn't know the difference. AI is much more effective at highly specialised tasks with vetted, specified datasets, however.

Sure, it's a starting point, I guess; if that's a workflow a person enjoys, then why not. My personal experience has led me to conclude it's actually a waste of time, and that I come to more reliable conclusions faster with general search queries.

Each to their own, I guess.
 
I read an article about a study that concluded that a particular AI program was better at diagnosing medical conditions from reading a medical file than doctors were. Curiously, when the doctors were able to use the same AI program, they did better than unassisted doctors but worse than the AI program alone.

I'm not saying that there won't be a Terminator- or Matrix-like world where AI runs everything, but there will be a sweet spot first where you can call the 800 number to fix your vacuum cleaner, speak to an AI that tells you how to fix it, and that will work as well as or better than speaking to an actual person.
 
