URL:LOG:PASS ISSUE...
by xzin0vich - Saturday June 8, 2024 at 02:50 PM
#1
Hi breachforums,

URL:LOG:PASS is a terrible format. I saw my own old lines reposted, indexed on IntelX, and re-sold as "private dataset and cloud" packages.

I think we need to build an anti-public dataset to stop the mass reposting of the same data, files without a single unique line, and the constant waste of credits and money on them, and to definitively stop such posts being allowed here.
#2
Agreed, but how do we accomplish that goal technically?
#3
(Jun 12, 2024, 07:41 PM)joepa Wrote: Agreed, but how do we accomplish that goal technically?

A big dataset (every log ever published publicly, collected in one place, for free).
An algorithm to search it by submitting sample lines.
(Use the sample lines to determine whether a set is public or a repost.)
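A minimal Python sketch of that lookup idea (the function names and normalization below are my own assumptions, not an existing tool): index every line of the already-public corpus as a hash, then score a submitted sample by how many of its lines are already known. A ratio near 1.0 means a repost.

```python
import hashlib

def normalize(line: str) -> str:
    """Normalize a URL:LOG:PASS line so trivial edits don't defeat matching."""
    return line.strip().lower()

def line_digest(line: str) -> str:
    """Hash each line so the anti-public index never stores plaintext."""
    return hashlib.sha256(normalize(line).encode("utf-8")).hexdigest()

def build_index(public_lines):
    """Index every line taken from already-public dumps."""
    return {line_digest(l) for l in public_lines}

def repost_ratio(sample_lines, index):
    """Fraction of the submitted sample already present in the index."""
    if not sample_lines:
        return 0.0
    hits = sum(1 for l in sample_lines if line_digest(l) in index)
    return hits / len(sample_lines)
```

A hash set keeps the index small and avoids redistributing the raw lines themselves; at real scale a Bloom filter would trade a little accuracy for far less memory.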
#4
I'm digging up this post since I also think stealer logs are out of control.
I know I'm a bit late, but I only got interested in stealer logs in the last few days.
I've reached a point where I can only say that this shit needs to change.
The same data is being used over and over...

So I did a bit of research and found a French cybersecurity company dealing with the issue. (I later noticed that @Lokie already made a thread about it, so credit to him too: http://breachedmw4otc2lhx7nqe4wyxfhpvy32...ogs-Parser)

They have come up with an interesting scheme: https://blog.lexfo.fr/images/infostealer...schema.png
And they released a fully developed parser: https://github.com/lexfo/stealer-parser

So I ran it on a 730MB rar archive containing 532 pieces, and it took a few minutes to parse.
The result was a 92MB output file in JSON format.
The system-information parsing still has issues, which matters because it's the most important part: it's what could uniquely identify each piece.

We could strip some useless fields to get faster parsing and smaller output files.
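As a sketch of that trimming step (the field names here are hypothetical, since I haven't mapped the parser's full JSON schema): keep only the fields needed to fingerprint a piece and rewrite the output as one compact object per line.

```python
import json

# Hypothetical field names -- the real stealer-parser schema may differ.
KEEP = {"system", "credentials"}

def prune(record: dict) -> dict:
    """Drop everything except the fields worth keeping."""
    return {k: v for k, v in record.items() if k in KEEP}

def compact(in_path: str, out_path: str) -> None:
    """Rewrite the parser's JSON output with only the kept fields,
    one compact JSON object per line (smaller and faster to scan)."""
    with open(in_path) as f:
        data = json.load(f)
    records = data if isinstance(data, list) else [data]
    with open(out_path, "w") as f:
        for rec in records:
            f.write(json.dumps(prune(rec), separators=(",", ":")) + "\n")
```

One object per line also means downstream tools can stream the file instead of loading 92MB of JSON at once.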

I think we could stand to gain by putting energy into this.
#5
(Jul 31, 2024, 04:36 PM)Magouilleur Wrote: I'm digging up this post since I also think stealer logs are out of control.
I know I'm a bit late, but I only got interested in stealer logs in the last few days.
I've reached a point where I can only say that this shit needs to change.
The same data is being used over and over...

So I did a bit of research and found a French cybersecurity company dealing with the issue. (I later noticed that @Lokie already made a thread about it, so credit to him too: http://breachedmw4otc2lhx7nqe4wyxfhpvy32...ogs-Parser)

They have come up with an interesting scheme: https://blog.lexfo.fr/images/infostealer...schema.png
And they released a fully developed parser: https://github.com/lexfo/stealer-parser

So I ran it on a 730MB rar archive containing 532 pieces, and it took a few minutes to parse.
The result was a 92MB output file in JSON format.
The system-information parsing still has issues, which matters because it's the most important part: it's what could uniquely identify each piece.

We could strip some useless fields to get faster parsing and smaller output files.

I think we could stand to gain by putting energy into this.

I'm also just starting out with stealer malware / logs and yeah... the number of people reposting logs to gain some credits makes me sick. HQ post though, thanks for the GitHub link.
#6
(Jul 31, 2024, 05:01 PM)bs0d Wrote:
(Jul 31, 2024, 04:36 PM)Magouilleur Wrote: I'm digging up this post since I also think stealer logs are out of control.
I know I'm a bit late, but I only got interested in stealer logs in the last few days.
I've reached a point where I can only say that this shit needs to change.
The same data is being used over and over...

So I did a bit of research and found a French cybersecurity company dealing with the issue. (I later noticed that @Lokie already made a thread about it, so credit to him too: http://breachedmw4otc2lhx7nqe4wyxfhpvy32...ogs-Parser)

They have come up with an interesting scheme: https://blog.lexfo.fr/images/infostealer...schema.png
And they released a fully developed parser: https://github.com/lexfo/stealer-parser

So I ran it on a 730MB rar archive containing 532 pieces, and it took a few minutes to parse.
The result was a 92MB output file in JSON format.
The system-information parsing still has issues, which matters because it's the most important part: it's what could uniquely identify each piece.

We could strip some useless fields to get faster parsing and smaller output files.

I think we could stand to gain by putting energy into this.

I'm also just starting out with stealer malware / logs and yeah... the number of people reposting logs to gain some credits makes me sick. HQ post though, thanks for the GitHub link.

I'm fine-tuning the parser to produce a useful anti-public file, as suggested by @xzin0vich.
I'll share progress on the fine-tuned parser so we can all discuss suggestions and add features.

