spacestr

Just another cypherpunk🧑🏼‍🎤
Member since: 2026-01-10
Just another cypherpunk🧑🏼‍🎤 5h

Yes, it is possible. The type of organization developing the project may vary. Companies such as DeepSeek, Meta, and similar ones provide the model architecture and weights, ready for use. That said, the process is not fully transparent: as an investor in this context, you do not know which data the model was trained on, nor for what purpose. Were training prompts used to enforce certain behaviors? To bias the model against specific ideals? In my opinion, this should be auditable. And if fine-tuning with private data is required afterward, each user providing that data should get their own secure copy of the model (see the sketch below).
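
A minimal sketch of what that last point could look like in practice, assuming the Hugging Face transformers and peft libraries. The model name is just one example of an open-weights release, and the training loop itself is omitted:

```python
# Sketch: each user fine-tunes a private copy of an open-weights model via a
# LoRA adapter, so their data (and the adapter trained on it) never leaves them.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import LoraConfig, get_peft_model

base = "deepseek-ai/deepseek-llm-7b-base"  # example; any open-weights model works
tokenizer = AutoTokenizer.from_pretrained(base)
model = AutoModelForCausalLM.from_pretrained(base, torch_dtype=torch.bfloat16)

# Small low-rank adapter: the public base weights stay frozen and auditable;
# only the adapter (a few MB) is trained on the user's private data.
config = LoraConfig(r=8, lora_alpha=16,
                    target_modules=["q_proj", "v_proj"],
                    task_type="CAUSAL_LM")
model = get_peft_model(model, config)
model.print_trainable_parameters()

# ... train on private data with any standard loop, then:
model.save_pretrained("my_private_adapter")  # stays on the user's machine
```

The key property: the frozen base model is public and can be audited, while the small adapter carrying the private data never has to leave the user's device.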

Just another cypherpunk🧑🏼‍🎤 23h

It’s weird, because the same happens to me. In Primal everything works, but in Iris I cannot even see my reply! Maybe I did something wrong? I’m new to Nostr :(

Just another cypherpunk🧑🏼‍🎤 1d

I really hope this turns out to be true. I’m opposed to the idea that “scale is all you need”; rather, I believe that “innovation and research are all you need.” My concern is that the scaling strategy can still be applied to multi-pass models, which would then likely outperform smaller ones. That not only increases training costs but also makes inference more expensive, since each answer requires multiple forward passes (rough cost sketch below). That said, I’m not very familiar with these types of architectures, so I’d be happy to read any material you’d recommend.
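
To make the inference-cost worry concrete, here is a toy back-of-the-envelope model. It is my own simplification, using the common ~2·N·T FLOPs-per-forward-pass rule of thumb; all the numbers are made up for illustration:

```python
# Toy cost model: a multi-pass model that runs k forward passes per answer
# multiplies inference FLOPs roughly by k, even when each pass is cheap.
def inference_cost(params_billion: float, tokens: int, passes: int = 1) -> float:
    """Approximate FLOPs per answer via the ~2*N*T rule of thumb."""
    return 2 * params_billion * 1e9 * tokens * passes

big_single = inference_cost(params_billion=70, tokens=500, passes=1)
small_multi = inference_cost(params_billion=7, tokens=500, passes=12)
print(f"70B, single pass: {big_single:.2e} FLOPs")   # 7.00e+13
print(f"7B, 12 passes   : {small_multi:.2e} FLOPs")  # 8.40e+13
```

Under these toy assumptions, a 7B model run for 12 passes already costs more per answer than a 70B model run once, so multi-pass architectures do not automatically escape the scaling treadmill.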

Just another cypherpunk🧑🏼‍🎤 1d

For me, this is all fine so far. The problem is that in non-open-source models we cannot access the probabilities (or logits) associated with the chosen tokens. A model may be internally uncertain about an answer and still produce an incorrect response that looks confident. It is crucial to have access to the level of confidence behind a model’s answers: somehow, the uncertainty associated with an output needs to be quantified, and the user should be made aware of it (small example below).
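
With open weights this is straightforward. A minimal sketch using Hugging Face transformers (gpt2 is just a stand-in for any open-weights model) that recovers the probability the model assigned to each token of a completion, one crude confidence signal:

```python
# Recover per-token probabilities from an open model's logits.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

name = "gpt2"  # stand-in for any open-weights model
tok = AutoTokenizer.from_pretrained(name)
model = AutoModelForCausalLM.from_pretrained(name).eval()

text = "The capital of France is Paris"
ids = tok(text, return_tensors="pt").input_ids
with torch.no_grad():
    logits = model(ids).logits  # shape [1, seq_len, vocab]

# Log-probability of each actual token, given its prefix.
logprobs = torch.log_softmax(logits[0, :-1], dim=-1)
chosen = logprobs.gather(1, ids[0, 1:].unsqueeze(1)).squeeze(1)
for t, lp in zip(ids[0, 1:], chosen):
    print(f"{tok.decode(int(t))!r:>10}  p={lp.exp().item():.3f}")
```

Per-token probabilities are only a rough proxy for real uncertainty, but closed APIs often do not expose even that much.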

Just another cypherpunk🧑🏼‍🎤 1d

I will always be up for a discussion ;)

Just another cypherpunk🧑🏼‍🎤 1d

I just eradicated 42 Nostr zombies using #PlebsVsZombies! ⚔️🧟‍♀️🧟‍♀️ My Zombie Score™ was 14%! What's yours? 🟪🟪🟪🟪🟪🟪🟪🟪🟪🟪🟪🟪🟩🟩 Follow and join the hunt at: 🏹 https://plebsvszombies.cc

#PlebsVsZombies #nostr #zombiehunting
Just another cypherpunk🧑🏼‍🎤 1d

Gm!

Just another cypherpunk🧑🏼‍🎤 3d

Current AI development is dominated by actors who can afford scaling, locking innovation behind capital, infrastructure, and centralized power. This leaves little room for individuals and communities who want to build competent models but cannot compete with Wall Street’s “scale is all you need” doctrine. What we need are decentralized AI systems that are built collectively, owned collectively, and designed from the ground up to ensure user privacy. Both the model architecture and the training data should be fully transparent, while the model weights could be monetized to reward contributors. That would create a transparent, community-driven, free-market ecosystem in which users decide which projects to fund and support, aligning incentives with innovation.

Just another cypherpunk🧑🏼‍🎤 3d

I'm new here. Just want to say hi to the community!👋

Just another cypherpunk🧑🏼‍🎤 3d

An efficient, decentralized learning protocol is the next step (see the sketch below).
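
A minimal sketch of one well-known candidate, federated averaging (FedAvg): nodes train locally on their own data and share only weight updates, never the data itself. Everything here is toy numbers; note that classic FedAvg still assumes a coordinator, which gossip-style variants remove:

```python
# Toy federated averaging round: each node takes a local SGD step on its own
# private data, then only the resulting weights are aggregated.
import numpy as np

def local_step(weights: np.ndarray, grad: np.ndarray, lr: float = 0.1) -> np.ndarray:
    """One local SGD step; grad stands in for the gradient of a node's private loss."""
    return weights - lr * grad

def fedavg(local_weights: list[np.ndarray]) -> np.ndarray:
    """Aggregate by averaging the nodes' models (equal weighting here)."""
    return np.mean(local_weights, axis=0)

# Toy run: 3 nodes sharing a 4-parameter model over 5 rounds.
global_w = np.zeros(4)
rng = np.random.default_rng(0)
for _ in range(5):
    updates = [local_step(global_w, rng.normal(size=4)) for _ in range(3)]
    global_w = fedavg(updates)
print(global_w)
```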

Just another cypherpunk🧑🏼‍🎤 1d

Hey :) Nice to meet you.

Welcome to Just another cypherpunk🧑🏼‍🎤’s spacestr profile!

About Me

I like thinking machines :) On #NOSTR since 2026.

Interests

  • No interests listed.

Videos

Music

My store is coming soon!

Friends