Best way to start building an LLM.
by DredgenSun - Wednesday November 29, 2023 at 12:57 PM
#1
So I've been looking into building my own offline chatbot, but rather than be informative like ChatGPT, I want to just fill it with stuff from my own imagination, so it becomes a bot that's literally a clone of me. I've been building LoRAs and such for image diffusion, and wouldn't mind getting into text generation and chatbots. Any ideas on good resources to look into? Should Breach have its own LLM? Big Grin
"Universal appeal is poison masquerading as medicine. Horror is not meant to be universal. It's meant to be personal, private, animal"
#2
Breached absolutely should not. It'll be filled with complete shit and the bias towards non-reality will be through the roof.

Building yourself something from a pre-existing LLM would be a good step. I've been wanting to for a while but never get around to it.

https://huggingface.co/spaces/HuggingFac...eaderboard

I often check here to see what the best performing ones are.
#3
(Nov 29, 2023, 02:03 PM)HassaMassa Wrote: Breached absolutely should not. It'll be filled with complete shit and the bias towards non-reality will be through the roof.

Building yourself something from a pre-existing LLM would be a good step. I've been wanting to for a while but never get around to it.

https://huggingface.co/spaces/HuggingFac...eaderboard

I often check here to see what the best performing ones are.

Thank you Hassa! (Great hearing from you again!)

As much as a Breach LLM might be reality-bending... huh... I guess that's a point in itself to create it, to see where the HELL it ends up.

Essentially I'm trying to create some measure of therapy by putting myself into an LLM and then talking to myself, like I want to be hyper-self-aware lol
"Universal appeal is poison masquerading as medicine. Horror is not meant to be universal. It's meant to be personal, private, animal"
#4
(Nov 29, 2023, 02:06 PM)DredgenSun Wrote:
Thank you Hassa! (Great hearing from you again!)

As much as a Breach LLM might be reality-bending... huh... I guess that's a point in itself to create it, to see where the HELL it ends up.

Essentially I'm trying to create some measure of therapy by putting myself into an LLM and then talking to myself, like I want to be hyper-self-aware lol

I guess the main issue is the training data itself. That's the hardest thing to compile. Pre-existing ones like Llama will have a lot to work with already, but you'll never be able to guarantee a lack of bias etc. if you don't provide your own data.
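For what it's worth, the "provide your own data" step usually boils down to reshaping your own messages into prompt/response records that LoRA-style fine-tuning scripts (e.g. ones built on Hugging Face's PEFT library) typically consume as JSONL. A minimal sketch with made-up chat lines and a hypothetical `to_training_records` helper; not any particular trainer's required schema:

```python
import json

def to_training_records(messages, me="DredgenSun"):
    """Pair each incoming message with your own reply, producing
    prompt/response records of the kind most instruction fine-tuning
    scripts accept as JSONL."""
    records = []
    prev = None
    for sender, text in messages:
        if sender == me and prev is not None:
            records.append({"prompt": prev, "response": text})
            prev = None
        elif sender != me:
            prev = text  # remember the last thing said to you
    return records

# Made-up example conversation
chat = [
    ("Friend", "Seen any good horror films lately?"),
    ("DredgenSun", "Always. Universal appeal is poison."),
    ("Friend", "You never change."),
    ("DredgenSun", "That's the point of the clone."),
]

# One JSON object per line -- the common fine-tuning dataset layout
with open("finetune.jsonl", "w") as f:
    for rec in to_training_records(chat):
        f.write(json.dumps(rec) + "\n")
```

The pairing here is deliberately naive (last inbound message becomes the prompt); a real dataset would want multi-turn context, but the file format is the part that carries over.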
#5
(Nov 29, 2023, 02:09 PM)HassaMassa Wrote:
I guess the main issue is the training data itself. That's the hardest thing to compile. Pre-existing ones like Llama will have a lot to work with already, but you'll never be able to guarantee a lack of bias etc. if you don't provide your own data.

Hmm, so I was thinking of loading a lot of WhatsApp messaging from my end. I've been training to a degree, and settled on a LoRA for my diffusion work; rather than have the checkpoint/model hold my face data, I put my face data into a LoRA and piped it into the workflow as an injection, to be interpreted after the model has loaded all the prompts/reference imaging.
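For reference, WhatsApp's "Export chat" feature produces a plain-text log whose exact line format varies by platform and locale, so any parser is an assumption. A rough sketch for the common Android-style export, folding multi-line messages and using a made-up sample:

```python
import re

# Matches the common Android-style export line:
#   29/11/2023, 14:03 - Name: message
# Other platforms/locales use different date layouts, so this
# pattern would need adjusting for a real export.
LINE = re.compile(r"^\d{1,2}/\d{1,2}/\d{2,4}, \d{1,2}:\d{2} - ([^:]+): (.*)$")

def parse_whatsapp(export_text):
    """Return (sender, message) pairs; lines that don't match the
    timestamp pattern are continuations of the previous message."""
    pairs = []
    for line in export_text.splitlines():
        m = LINE.match(line)
        if m:
            pairs.append((m.group(1), m.group(2)))
        elif pairs:
            sender, text = pairs[-1]
            pairs[-1] = (sender, text + "\n" + line)
    return pairs

# Made-up sample export
sample = (
    "29/11/2023, 14:03 - HassaMassa: It'll be filled with complete shit\n"
    "29/11/2023, 14:06 - DredgenSun: Thank you Hassa!\n"
    "Great hearing from you again!"
)
```

Once parsed, the pairs can feed either a fine-tuning dataset or a plain prompt, whichever route you take.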
"Universal appeal is poison masquerading as medicine. Horror is not meant to be universal. It's meant to be personal, private, animal"
#6
(Nov 29, 2023, 02:13 PM)DredgenSun Wrote:
Hmm, so I was thinking of loading a lot of WhatsApp messaging from my end. I've been training to a degree, and settled on a LoRA for my diffusion work; rather than have the checkpoint/model hold my face data, I put my face data into a LoRA and piped it into the workflow as an injection, to be interpreted after the model has loaded all the prompts/reference imaging.

I don't know shit about image AI I'm afraid, but I'm not convinced it's the same process. You'd need a mega load of WhatsApp data, and context outside of that too, or presumably it has no way of putting any of the WhatsApp messages into context.

If you just wanted it to analyse them, you could include the WhatsApp messages in the prompt and use the LLM as a glorified file/text parser of sorts.
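That "glorified text parser" route is just context-stuffing: paste the messages straight into the prompt and ask questions about them, no training at all. A minimal sketch, where `max_chars` is a crude stand-in for the model's real context window and the llama-cpp-python call at the end is an illustrative, commented-out assumption:

```python
def build_analysis_prompt(messages, question, max_chars=8000):
    """Stuff (sender, text) chat pairs into a single prompt so an
    off-the-shelf LLM can answer questions about them.  When the
    transcript exceeds max_chars, the oldest messages are dropped."""
    lines = [f"{sender}: {text}" for sender, text in messages]
    transcript = "\n".join(lines)
    while len(transcript) > max_chars and lines:
        lines.pop(0)  # drop the oldest message first
        transcript = "\n".join(lines)
    return (
        "Below is a chat transcript.\n\n"
        f"{transcript}\n\n"
        f"Question: {question}\nAnswer:"
    )

prompt = build_analysis_prompt(
    [("Friend", "up for the cinema?"), ("Me", "only if it's horror")],
    "What genre does 'Me' prefer?",
)

# A local runtime such as llama-cpp-python could then run it, e.g.:
#   from llama_cpp import Llama
#   llm = Llama(model_path="model.gguf")
#   print(llm(prompt, max_tokens=64))
```

Character counts are only a rough proxy for tokens, so a real version would measure the prompt with the model's own tokenizer before trimming.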
#7
(Nov 29, 2023, 02:15 PM)HassaMassa Wrote:
I don't know shit about image AI I'm afraid, but I'm not convinced it's the same process. You'd need a mega load of WhatsApp data, and context outside of that too, or presumably it has no way of putting any of the WhatsApp messages into context.

If you just wanted it to analyse them, you could include the WhatsApp messages in the prompt and use the LLM as a glorified file/text parser of sorts.

Hassa, you're a flippin' genius! I'll take on board what you said, run my messages through an LLM and see what comes from that.


Probably a diagnosis, knowing my luck
"Universal appeal is poison masquerading as medicine. Horror is not meant to be universal. It's meant to be personal, private, animal"