URL:LOG:PASS ISSUE...
by xzin0vich - Saturday June 8, 2024 at 02:50 PM
#1
Hi breachforums,

URL:LOG:PASS is a terrible format. I saw my own old lines reposted, indexed on IntelX, and re-sold as "private dataset and cloud" packages.

I think we need to build an anti-public dataset so that the same data stops getting mass-reposted in files with no unique lines, so that people stop wasting credits and money on it, and so that posting such repacks here is no longer allowed.
Reply
#2
Agreed, but how do we accomplish that goal (technically)?
Reply
#3
(Jun 12, 2024, 07:41 PM)joepa Wrote: Agree but how to accomplish the goal (technically)?

A big dataset (collect every log ever published for free).
An algorithm to search it by submitting sample lines
(use the sample lines to determine whether a set is public or a repost).
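
The sample-line check described above could be sketched like this: normalize each URL:LOG:PASS line, hash it, and test membership against an index built from every known public dump. A submitted sample whose lines are mostly already indexed is a repost. All names and the on-disk format here are assumptions, not an existing tool.

```python
import hashlib

def line_key(line: str) -> str:
    """Normalize a URL:LOG:PASS line and return a stable SHA-256 key."""
    return hashlib.sha256(line.strip().lower().encode("utf-8")).hexdigest()

def build_public_index(public_lines):
    """Index every non-empty line of every known public dump."""
    return {line_key(l) for l in public_lines if l.strip()}

def repost_ratio(sample_lines, public_index) -> float:
    """Fraction of the submitted sample already present in the public index."""
    keys = [line_key(l) for l in sample_lines if l.strip()]
    if not keys:
        return 0.0
    hits = sum(1 for k in keys if k in public_index)
    return hits / len(keys)

public = build_public_index([
    "https://example.com/login:alice:hunter2",
    "https://example.org/auth:bob:letmein",
])
sample = [
    "https://example.com/login:alice:hunter2",  # already public
    "https://example.net/panel:carol:s3cret",   # new line
]
print(repost_ratio(sample, public))  # 0.5 -> half the sample is a repost
```

Hashing instead of storing raw lines keeps the index compact and means the checker never has to hold the plaintext dumps.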
Reply
#4
I'm digging up this post since I also think stealer logs are out of control.
I know I'm a bit late, but I only got interested in stealer logs in the last few days.
I've reached the point where I can only say that this needs to change.
The same data is being used over and over...

So I did a bit of research and found a French cybersecurity company dealing with the issue. (I later noticed that @Lokie already made a thread about it, so credit to him too: http://breachedmw4otc2lhx7nqe4wyxfhpvy32...ogs-Parser)

They have come up with an interesting scheme: https://blog.lexfo.fr/images/infostealer...schema.png
And they released a fully developed parser: https://github.com/lexfo/stealer-parser

So I ran it on a 730 MB RAR archive containing 532 pieces and it took a few minutes to parse.
It produced a 92 MB output file in JSON format.
The system-information parsing still has issues, which matters because it's the most important part: it could uniquely identify each piece.

We could drop some useless fields to get faster parsing and smaller output files.

I think we could stand to gain by putting energy into this.
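
The "drop some useless fields" idea could look like this: load the parser's JSON output and keep only what's needed to identify and deduplicate each piece. The field names below (machine_id, credentials, url, login, password) are assumptions about the schema, not the actual lexfo/stealer-parser output; adjust them to the real keys.

```python
import json

KEEP_CREDENTIAL_FIELDS = ("url", "login", "password")

def slim_record(record: dict) -> dict:
    """Keep only the identifying fields of one parsed log."""
    creds = [
        {k: c.get(k) for k in KEEP_CREDENTIAL_FIELDS}
        for c in record.get("credentials", [])
    ]
    return {"machine_id": record.get("machine_id"), "credentials": creds}

def slim_file(src_path: str, dst_path: str) -> None:
    """Rewrite the parser's JSON output with only the kept fields."""
    with open(src_path, encoding="utf-8") as f:
        records = json.load(f)
    with open(dst_path, "w", encoding="utf-8") as f:
        # Compact separators shave further bytes off the output.
        json.dump([slim_record(r) for r in records], f, separators=(",", ":"))
```

Stripping browser metadata and the like this way is what would bring the 92 MB output down and speed up any downstream dedup pass.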
Reply
#5
(Jul 31, 2024, 04:36 PM)Magouilleur Wrote: I'm digging up this post since I also think stealer logs are out of control. [...]

I'm also just starting out with stealer malware / logs and yeah... the amount of people reposting logs to farm credits makes me sick. HQ post though, thanks for the GitHub link.
Reply
#6
(Jul 31, 2024, 05:01 PM)bs0d Wrote:
(Jul 31, 2024, 04:36 PM)Magouilleur Wrote: [...]

I'm also just starting out with stealer malware / logs and yeah... the amount of people reposting logs to gain some credits makes me sick. HQ post though, thanks for the GitHub link.

I'm fine-tuning the parser to produce a useful anti-public file, as suggested by @xzin0vich.
I'll share the progress of the fine-tuned parser so we can all discuss suggestions and add features.
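
One way a fine-tuned parser could maintain the anti-public file proposed earlier in the thread: every url:login:password triple pulled from a parsed archive gets merged into a single deduplicated set, so a later upload can be diffed against it. The format (one lowercase colon-separated line per credential) is an assumption of this sketch.

```python
def merge_antipublic(existing_lines, new_triples):
    """Return (full sorted anti-public list, lines that were actually new)."""
    known = {l.strip() for l in existing_lines if l.strip()}
    added = []
    for url, login, password in new_triples:
        line = f"{url}:{login}:{password}".lower()
        if line not in known:
            known.add(line)
            added.append(line)
    return sorted(known), added

known, added = merge_antipublic(
    ["https://example.com/login:alice:hunter2"],
    [
        ("https://example.com/login", "alice", "hunter2"),  # duplicate
        ("https://example.net/panel", "carol", "s3cret"),   # genuinely new
    ],
)
print(len(added))  # 1 -> only one line in the upload was not public yet
```

The `added` list is the interesting part: an upload whose `added` count is near zero is exactly the kind of repost the thread is complaining about.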
Reply

