
did you ever create that npub?
https://npub.blog is a webapp for reading long-form content. it pulls articles from everyone you follow and presents them as a feed. I haven't found another site that does this, so I built it. I've been polishing it, but it's obviously still very raw and untested. give it a look, and send me feedback if you notice something broken. you can enter an npub or a NIP-05 if you want to see someone else's article feed, or just sign in with your own to read yours. one day nostr will transparently supplant RSS feeds as the obvious way to asynchronously distribute and track long-form content. there are just some missing components along the way that we have to build.
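
under the hood, a reader like this boils down to two NIP-01 queries: fetch the contact list (kind 3), then fetch long-form articles (kind 30023, per NIP-23) authored by those follows. here is a minimal sketch of that flow, assuming a hex pubkey (NIP-19 npub decoding omitted), an illustrative relay URL, and the third-party `websockets` package; it is not npub.blog's actual implementation:

```python
# Sketch: pull a long-form article feed for a pubkey's follows over raw NIP-01.
# Assumptions: pubkey is already hex, the relay URL is an illustrative choice,
# and the third-party `websockets` package is installed.
import asyncio
import json

import websockets

RELAY = "wss://relay.damus.io"  # any public relay


async def fetch_events(ws, sub_id, filt):
    """Send one REQ and collect events until EOSE (NIP-01)."""
    await ws.send(json.dumps(["REQ", sub_id, filt]))
    events = []
    while True:
        msg = json.loads(await ws.recv())
        if msg[0] == "EVENT" and msg[1] == sub_id:
            events.append(msg[2])
        elif msg[0] == "EOSE" and msg[1] == sub_id:
            await ws.send(json.dumps(["CLOSE", sub_id]))
            return events


async def long_form_feed(pubkey_hex):
    async with websockets.connect(RELAY) as ws:
        # kind 3 = contact list; its "p" tags are the follows
        contacts = await fetch_events(
            ws, "follows", {"kinds": [3], "authors": [pubkey_hex], "limit": 1})
        follows = [t[1] for ev in contacts for t in ev["tags"] if t[0] == "p"]
        # kind 30023 = NIP-23 long-form content authored by anyone followed
        articles = await fetch_events(
            ws, "articles", {"kinds": [30023], "authors": follows, "limit": 50})
        for ev in sorted(articles, key=lambda e: e["created_at"], reverse=True):
            title = next((t[1] for t in ev["tags"] if t[0] == "title"), "(untitled)")
            print(title)


# asyncio.run(long_form_feed("<hex pubkey>"))
```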
this is the answer. the nips are axioms, your apps are theorems. prove some useful stuff.
chanterelles for the win
I want the privacy, autonomy, offline capability, etc., but it may well also be cheaper to run something locally than to pay $200 a month or more for something they can rug you on unilaterally (thinking about the weekly caps, etc). I think the small-but-smart concept is true enough. I see more capacity and quality coming to local models, especially as the bench of open-source coding models gets deeper and better.
I am able to run models locally on my RTX 3060, but that is a paltry 12GB of capacity. the models I can fit on it are not worth coding with.
this is a very helpful answer, thank you. wish I could zap you!
I will #zap good answers
what is the most cost-effective way to run a #LocalLLM coding model? I'd like as much capacity as possible, for instance to run something like qwen3-coder, kimi-k2, magistral, etc in their highest-fidelity instantiations. I see three high-level paths. buy:
- an nvidia card $$
- an AMD card $ + hassle with ROCm etc
- a mac with system ram high enough for this task $?$?
- something else?
it seems like 24GB is doable for quantized versions of these models, but that leaves little room, maybe 4K tokens, for the context window. #asknostr #ai #llm
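
the VRAM budget behind that context-window worry is roughly weights plus KV cache: quantized weights take params × bits/8 bytes, and the KV cache grows linearly with context length. a back-of-the-envelope sketch follows; the layer/head/dimension numbers are illustrative assumptions, not the real specs of any of the models named above:

```python
# Back-of-the-envelope VRAM estimator: weights + KV cache must fit on the card.
# All model dimensions below are illustrative assumptions; substitute the actual
# config of the model you intend to run.

def weight_bytes(params_billion: float, bits_per_weight: float) -> float:
    """Memory for the quantized weights alone."""
    return params_billion * 1e9 * bits_per_weight / 8


def kv_bytes_per_token(layers: int, kv_heads: int, head_dim: int,
                       cache_bytes: int = 2) -> float:
    """KV cache per token: K and V, per layer, per KV head (fp16 cache assumed)."""
    return 2 * layers * kv_heads * head_dim * cache_bytes


def max_context(vram_gb: float, params_billion: float, bits: float,
                layers: int, kv_heads: int, head_dim: int,
                overhead_gb: float = 1.5) -> int:
    """Rough max context length once weights and runtime overhead are paid for."""
    free = vram_gb * 1e9 - weight_bytes(params_billion, bits) - overhead_gb * 1e9
    return max(0, int(free / kv_bytes_per_token(layers, kv_heads, head_dim)))


# Example: a hypothetical 32B dense model at 4-bit on a 24 GB card
print(max_context(vram_gb=24, params_billion=32, bits=4,
                  layers=64, kv_heads=8, head_dim=128))
```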
they might be vents, to allow air to escape the mold while it's filling, so there aren't voids among the tread detail
ΔC https://drss.io https://npub.blog building bridges from RSS => nostr and nostr => RSS. nostr seems like the ideal mechanism for distributing feeds of all types, but especially blogs and podcasts.
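
for the RSS => nostr direction, the core of such a bridge is mapping one feed entry onto a NIP-23 long-form event (kind 30023). a minimal sketch of that mapping, not how drss.io actually does it: the field names on the entry dict and the choice of the item link as the "d" identifier are assumptions, and signing (NIP-01 id + sig) is omitted:

```python
# Sketch: shape one RSS feed entry into an unsigned NIP-23 long-form event.
# The `entry` field names and the use of the item link as the "d" identifier
# are assumptions; signing the event per NIP-01 is left out.
import time


def rss_entry_to_nostr_event(entry: dict, pubkey_hex: str) -> dict:
    return {
        "pubkey": pubkey_hex,
        "created_at": int(time.time()),
        "kind": 30023,  # NIP-23 long-form content, addressable by its "d" tag
        "tags": [
            ["d", entry["link"]],                       # stable id so republishing replaces
            ["title", entry["title"]],
            ["published_at", str(entry["published"])],  # unix timestamp of the original post
        ],
        "content": entry["content_markdown"],           # article body, markdown per NIP-23
    }
```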