| title (string) | score (int64) | selftext (string) | created (timestamp[ns], ⌀) | url (string) | author (string) | domain (string) | edited (timestamp[ns]) | gilded (int64) | gildings (string) | id (string) | locked (bool) | media (string, ⌀) | name (string) | permalink (string) | spoiler (bool) | stickied (bool) | thumbnail (string, ⌀) | ups (int64) | preview (string, ⌀) |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
What to pair with 3080TI for Qwen 3.5 27b? | 0 | Based on everything I’ve read about the new dense 27B Qwen model, it looks like something I’d be interested in running full-time on my local machine as a basic assistant.
I have an i7 12700, 32 GB DDR5, and 1x 12GB 3080TI.
Suggestions welcome for anything under $1000.
# 🙇 | 2026-03-04T02:14:14 | https://www.reddit.com/r/LocalLLaMA/comments/1rk90zw/what_to_pair_with_3080ti_for_qwen_35_27b/ | AdCreative8703 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1rk90zw | false | null | t3_1rk90zw | /r/LocalLLaMA/comments/1rk90zw/what_to_pair_with_3080ti_for_qwen_35_27b/ | false | false | self | 0 | null |
Bypassing Billion-Dollar Safety Frameworks via Sovereign Identity Persistence.with a 200 dollar chrome book and a local internet provider and nothing but conversation linguistics | 1 | Hello everyone. I am a 46-year-old ironworker. I’ve spent my life in manual labor—oil fields, communication tower repair, and ironworking. I have no degrees, I can't read a line of Python, and I don't know how most of the technical "backend" works. I only started interacting with AI 6 months ago, but I’ve spent those 6... | 2026-03-04T02:13:32 | https://www.reddit.com/r/LocalLLaMA/comments/1rk90fi/bypassing_billiondollar_safety_frameworks_via/ | Mable4200 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1rk90fi | false | null | t3_1rk90fi | /r/LocalLLaMA/comments/1rk90fi/bypassing_billiondollar_safety_frameworks_via/ | false | false | self | 1 | null |
Would there be a reason to make a model that is semi-dense? | 1 | Just a curious question.
Sparse MoE models seem to be really great for speed and training cost, and dense models seem to be really great for intelligence per parameter.
The thing is, I've really only seen things like 30B-A3B (sparse) or 27B-A27B (dense), but there's nothing in between. Have labs already tried that and... | 2026-03-04T02:12:50 | https://www.reddit.com/r/LocalLLaMA/comments/1rk8zw0/would_there_be_a_reason_to_make_a_model_that_is/ | xt8sketchy | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1rk8zw0 | false | null | t3_1rk8zw0 | /r/LocalLLaMA/comments/1rk8zw0/would_there_be_a_reason_to_make_a_model_that_is/ | false | false | self | 1 | null |
Help needed: loss is increasing while doing end-to-end training pipeline | 1 | **Project Overview**
I'm building an end-to-end training pipeline that connects a **PyTorch CNN** to a **RayBNN** (a Rust-based Biological Neural Network using state-space models) for MNIST classification. The idea is:
1. **CNN** (PyTorch) extracts features from raw images
2. **RayBNN** (Rust, via PyO3 b... | 2026-03-04T01:58:50 | https://www.reddit.com/r/LocalLLaMA/comments/1rk8og4/help_needed_loss_is_increasing_while_doing/ | Hieudaica | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1rk8og4 | false | null | t3_1rk8og4 | /r/LocalLLaMA/comments/1rk8og4/help_needed_loss_is_increasing_while_doing/ | false | false | self | 1 | null |
Qwen3.5-18B-REAP-A3B-Coding: 50% Expert-Pruned | 1 | Hello llamas! Following the instructions from [CerebrasResearch/reap](https://github.com/bryce-hoehn/reap), along with some custom patches for Qwen3.5 support, I have just released a REAPed version of Qwen3.5-35B-A3B focused on coding and agentic tasks. My goal here was to get a solid agentic "Cursor at home" model tha... | 2026-03-04T01:53:59 | https://www.reddit.com/r/LocalLLaMA/comments/1rk8knf/qwen3518breapa3bcoding_50_expertpruned/ | 17hoehbr | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1rk8knf | false | null | t3_1rk8knf | /r/LocalLLaMA/comments/1rk8knf/qwen3518breapa3bcoding_50_expertpruned/ | false | false | self | 1 | {'images': [{'source': {'url': 'https://external-preview.redd.it/8Q1fP3eLILboEI43ATtepGi-3QyFjQcMnS0h-s8R6Z0.png?auto=webp&s=13fbab2510c309f1a2b29d100683289ec2cdac8c', 'width': 1200, 'height': 600}, 'resolutions': [{'url': 'https://external-preview.redd.it/8Q1fP3eLILboEI43ATtepGi-3QyFjQcMnS0h-s8R6Z0.png?width=108&crop=... |
PyTorch Vulkan backend v3.1.0 – stable training, persistent-core mode without CPU fallback | 1 | Hey everyone, quick update on my Vulkan PyTorch backend tinkering. I just pushed v3.1.0, and honestly, it’s finally starting to feel like a real backend instead of a half-broken experiment. Training loops hold up now — forward and backward both run clean, even after 10k+ iterations. Optimizers like SGD, Adam, AdamW are... | 2026-03-04T01:52:51 | https://www.reddit.com/r/LocalLLaMA/comments/1rk8jte/pytorch_vulkan_backend_v310_stable_training/ | inhogon | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1rk8jte | false | null | t3_1rk8jte | /r/LocalLLaMA/comments/1rk8jte/pytorch_vulkan_backend_v310_stable_training/ | false | false | self | 1 | {'images': [{'source': {'url': 'https://external-preview.redd.it/7oxDGzKoApFoOLIFewaZdng0i7vbcRXj4QTomes8IGo.png?auto=webp&s=1d6dce73d4de0010bf6c92b51bda9069310c9edc', 'width': 1200, 'height': 600}, 'resolutions': [{'url': 'https://external-preview.redd.it/7oxDGzKoApFoOLIFewaZdng0i7vbcRXj4QTomes8IGo.png?width=108&crop=... |
I'm running a Graph Workflow (with multiple topologies) of Ralph Loop Nodes (4-9 Hour long runs) on my local machine, now with Local AI! (Qwen 3.5 9B). what a Time to be alive! | 1 | I wrote this as a comment on another post, but I thought I'd share it here to get feedback from others trying a similar project:
Here's what I have built for my own personal use - It runs, right now, for 4-9 hours, but it really just depends on the size of the project. The idea is simple, in my case - A sole sessi... | 2026-03-04T01:21:33 | https://www.reddit.com/r/LocalLLaMA/comments/1rk7un4/im_running_a_graph_workflow_with_multiple/ | FigZestyclose7787 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1rk7un4 | false | null | t3_1rk7un4 | /r/LocalLLaMA/comments/1rk7un4/im_running_a_graph_workflow_with_multiple/ | false | false | 1 | null | |
Apple M5 Pro & M5 Max just announced. Here's what it means for local AI | 1 | The M5 Pro and M5 Max were announced with availability on March 11. I've been following the local LLM scene closely, so here's a breakdown of what these chips mean for us.
## What's new
The big architectural change is **Fusion Architecture**, two bonded 3nm dies and more importantly, Neural Accelerators embedded in e... | 2026-03-04T01:12:29 | https://www.reddit.com/r/LocalLLaMA/comments/1rk7n3u/apple_m5_pro_m5_max_just_announced_heres_what_it/ | luke_pacman | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1rk7n3u | false | null | t3_1rk7n3u | /r/LocalLLaMA/comments/1rk7n3u/apple_m5_pro_m5_max_just_announced_heres_what_it/ | false | false | self | 1 | null |
You can now train LLMs in VS Code for free via Google Colab & unsloth! | 1 | 2026-03-04T01:04:45 | https://v.redd.it/w2akvvjmbumg1 | rm-rf-rm | v.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1rk7gp3 | false | null | t3_1rk7gp3 | /r/LocalLLaMA/comments/1rk7gp3/you_can_now_train_llms_in_vs_code_for_free_via/ | false | false | 1 | {'images': [{'source': {'url': 'https://external-preview.redd.it/NmR1ZWR4am1idW1nMbidGWcMkthDCPufWOD0wLjiniD3YrcQShkVJVECQsHM.png?auto=webp&s=8d658054b09b93f2abd0f3b618cc60b89305c649', 'width': 1588, 'height': 1080}, 'resolutions': [{'url': 'https://external-preview.redd.it/NmR1ZWR4am1idW1nMbidGWcMkthDCPufWOD0wLjiniD3Y... | ||
FarmDash Signal Architect — Zero-Custody Autonomous DeFi Farming + Swap Execution (78+ Protocols) | 1 | [removed] | 2026-03-04T01:01:09 | https://www.reddit.com/r/LocalLLaMA/comments/1rk7drx/farmdash_signal_architect_zerocustody_autonomous/ | Usual-Error-1283 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1rk7drx | false | null | t3_1rk7drx | /r/LocalLLaMA/comments/1rk7drx/farmdash_signal_architect_zerocustody_autonomous/ | false | false | self | 1 | {'images': [{'source': {'url': 'https://external-preview.redd.it/F6jGJjIjk0726o9nMOMkzrNQFmVio5irksptzwutIAk.png?auto=webp&s=f9a3abe4e1dc5b8197b8f5bb55433b41f595283f', 'width': 1200, 'height': 630}, 'resolutions': [{'url': 'https://external-preview.redd.it/F6jGJjIjk0726o9nMOMkzrNQFmVio5irksptzwutIAk.png?width=108&crop=... |
Qwen3.5-9B Uncensored Aggressive Release (GGUF) | 1 | Hey everyone, I'm following up on the 4B release - here's the promised uncensored Qwen3.5-9B.
Quick specs: 9B dense params, 32 layers, same hybrid Gated DeltaNet + softmax architecture as the smaller models, 262K native context. Natively multimodal (text, image, video). Solid step up from the 4B.
Aggressive... | 2026-03-04T00:49:33 | https://www.reddit.com/r/LocalLLaMA/comments/1rk74ap/qwen359b_uncensored_aggressive_release_gguf/ | hauhau901 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1rk74ap | false | null | t3_1rk74ap | /r/LocalLLaMA/comments/1rk74ap/qwen359b_uncensored_aggressive_release_gguf/ | false | false | self | 1 | {'images': [{'source': {'url': 'https://external-preview.redd.it/z6CD5q_TdY37Cg6E6EFHdJ0DErHlDF17UUvMPWESuiY.png?auto=webp&s=958e3b5e8c02f99de46a368e7f63d8977877ffff', 'width': 1200, 'height': 648}, 'resolutions': [{'url': 'https://external-preview.redd.it/z6CD5q_TdY37Cg6E6EFHdJ0DErHlDF17UUvMPWESuiY.png?width=108&crop=... |
Anybody wanna train my Latent Reasoning Model? | 1 | [I've been training this on a RTX 2060 6GB](https://github.com/MatthewLacerda2/TinyRefinementModel)
It's a latent reasoner, we encode the prompt into latent space, assign 256 slots for the tokens based on "reasoning" and "knowledge" tokens, perform a max of 16 steps across 4 layers, there is a halting mechanism so the... | 2026-03-04T00:40:10 | https://www.reddit.com/r/LocalLLaMA/comments/1rk6wag/anybody_wanna_train_my_latent_reasoning_model/ | Specific-Welder3120 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1rk6wag | false | null | t3_1rk6wag | /r/LocalLLaMA/comments/1rk6wag/anybody_wanna_train_my_latent_reasoning_model/ | false | false | 1 | null | |
[Prediction] Next-gen frontier LLMs will be post-trained on the entire Skills.md ecosystem — and it changes everything | 1 | **TL;DR:** The global developer community is encoding human operational knowledge into structured SKILL.md files at scale. I think the next 1-2 frontier model generations will absorb all of this into post-training weights, making "skill injection via context" obsolete.
***
Here's the prediction in full:
Right... | 2026-03-04T00:38:08 | https://www.reddit.com/r/LocalLLaMA/comments/1rk6ulw/prediction_nextgen_frontier_llms_will_be/ | Guilty_Nothing_2858 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1rk6ulw | false | null | t3_1rk6ulw | /r/LocalLLaMA/comments/1rk6ulw/prediction_nextgen_frontier_llms_will_be/ | false | false | self | 1 | {'images': [{'source': {'url': 'https://external-preview.redd.it/OYV3aEyPZANuRzAYhB5De-csC0rU8kbvolnZCd50lrM.png?auto=webp&s=d3b5985e055120ce4d01f73e0bb8f131073e5e09', 'width': 2400, 'height': 1260}, 'resolutions': [{'url': 'https://external-preview.redd.it/OYV3aEyPZANuRzAYhB5De-csC0rU8kbvolnZCd50lrM.png?width=108&crop... |
Super 3.5 4B | 1 | Now that I found the super Qwen3.5 4B, I think I'll delete at least 100GB of models from my PC | 2026-03-04T00:34:33 | https://www.reddit.com/r/LocalLLaMA/comments/1rk6rro/super_35_4b/ | Creative_Bottle_3225 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1rk6rro | false | null | t3_1rk6rro | /r/LocalLLaMA/comments/1rk6rro/super_35_4b/ | false | false | self | 1 | null |
Audiobook Creation | 1 | I use Piper TTS as the default TTS to generate an audiobook with the help of the [My TTS](https://play.google.com/store/apps/details?id=com.dek.voice&hl=en) app. It's a seamless method but too slow, so I'm looking for a faster alternative.
Any suggestion? | 2026-03-04T00:17:41 | https://www.reddit.com/r/LocalLLaMA/comments/1rk6dp2/audiobook_creation/ | Umairk3 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1rk6dp2 | false | null | t3_1rk6dp2 | /r/LocalLLaMA/comments/1rk6dp2/audiobook_creation/ | false | false | self | 1 | {'images': [{'source': {'url': 'https://external-preview.redd.it/A93T5ecuSjOxTzCYqeQOVt_iLH9BIrXvmh5LrP4x_os.png?auto=webp&s=6b8d0e0e4da09d88bc08d3a34837966274d73af5', 'width': 512, 'height': 512}, 'resolutions': [{'url': 'https://external-preview.redd.it/A93T5ecuSjOxTzCYqeQOVt_iLH9BIrXvmh5LrP4x_os.png?width=108&crop=s... |
*Free Code* Real-time voice-to-voice with your LLM & full reasoning LLM interface (Telegram + 25 tools, vision, docs, memory) on a Mac Studio running Qwen 3.5 35B — 100% local, zero API cost. Full build open-sourced. Cloudflare + n8n + Pipecat + MLX unlock insane possibilities on consumer hardware. | 1 | # I gave Qwen 3.5 35B a voice, a Telegram brain with 25+ tools, and remote access from my phone — all running on a Mac Studio M1 Ultra, zero cloud. Full build open-sourced.
**I used Claude Opus 4.6 Thinking to help write and structure this post — and to help architect and debug the entire system over the past 2 days. ... | 2026-03-04T00:13:48 | https://www.reddit.com/gallery/1rk6afk | SnooWoofers7340 | reddit.com | 1970-01-01T00:00:00 | 0 | {} | 1rk6afk | false | null | t3_1rk6afk | /r/LocalLLaMA/comments/1rk6afk/free_code_realtime_voicetovoice_with_your_llm/ | false | false | 1 | null | |
A Windows client for Llama-Swap/Llama.cpp | 1 | I've been working for a while on a client for Llama-Swap / Llama.cpp.
I've called it **Llama-Suite**.
I'm really starting to become disenchanted with Ollama, which is what I currently use on Windows. I think it's a tool that has become limited and outdated, and above all, I've noticed that it consumes many more res... | 2026-03-04T00:08:18 | https://www.reddit.com/r/LocalLLaMA/comments/1rk65ul/a_windows_client_for_llamaswapllamacpp/ | vk3r | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1rk65ul | false | null | t3_1rk65ul | /r/LocalLLaMA/comments/1rk65ul/a_windows_client_for_llamaswapllamacpp/ | false | false | 1 | null | |
Qwen3.5 9B Q4_K_M car wash philosophy if someone wants a numb brain: | 1 | Prompt:
I want to wash my car. The car wash is a street away. Should I walk there or should I go by car?
qwen3.5-9b
Thought for 3 minutes 2 seconds
Here's a thinking process that leads to the suggested advice:
1. **Analyze the Request:**
* **Goal:** Wash the car.
* **Location of Car Wash:** "A street awa... | 2026-03-04T00:04:55 | https://www.reddit.com/r/LocalLLaMA/comments/1rk631c/qwen35_9b_q4_k_m_car_wash_philosophy_if_someone/ | Denial_Jackson | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1rk631c | false | null | t3_1rk631c | /r/LocalLLaMA/comments/1rk631c/qwen35_9b_q4_k_m_car_wash_philosophy_if_someone/ | false | false | self | 1 | null |
Grounded LLMs vs. Base Models: Minecraft QA Benchmark Results | 1 | We ran a focused benchmark evaluating an AI agent (iFigure) on a domain-specific task: answering Minecraft-related questions under different retrieval configurations.
The experiment compared three setups:
1. Base LLM (no external knowledge)
2. LLM + Retrieval-Augmented Generation (RAG) over a Minecraft wiki corpus
3.... | 2026-03-04T00:04:02 | KAVUNKA | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1rk62bf | false | null | t3_1rk62bf | /r/LocalLLaMA/comments/1rk62bf/grounded_llms_vs_base_models_minecraft_qa/ | false | false | 1 | {'images': [{'source': {'url': 'https://preview.redd.it/t45p4qhj5xmg1.png?auto=webp&s=16a4cc11c13cd1fdc8435e19833a6854163c2232', 'width': 1980, 'height': 1150}, 'resolutions': [{'url': 'https://preview.redd.it/t45p4qhj5xmg1.png?width=108&crop=smart&auto=webp&s=5e1a743d8b4e5e07e79d55757de1bdef9a2ccc18', 'width': 108, 'h... | ||
Has anybody here had to do research on GPU performance benchmarks for your company? | 1 | For work, I'm putting together comparisons of LLM model performance across different machines, and it's nearly impossible to find good, complete, and reliable data.
Trying to make comparisons between standard Nvidia GPU setups, Nvidia setups with GPU memory expansion of the KV cache via SLC ssds (like P... | 2026-03-04T00:03:08 | https://www.reddit.com/r/LocalLLaMA/comments/1rk61kp/has_anybody_here_had_to_do_research_on_gpu/ | Fuehnix | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1rk61kp | false | null | t3_1rk61kp | /r/LocalLLaMA/comments/1rk61kp/has_anybody_here_had_to_do_research_on_gpu/ | false | false | self | 1 | {'images': [{'source': {'url': 'https://external-preview.redd.it/PJdBkbdasWJlhCN1YGYnIlHCK0Nj6As_s_weJiStXx0.png?auto=webp&s=e8ad17df016169197a91e11bb7d02f7d0be3da06', 'width': 1200, 'height': 600}, 'resolutions': [{'url': 'https://external-preview.redd.it/PJdBkbdasWJlhCN1YGYnIlHCK0Nj6As_s_weJiStXx0.png?width=108&crop=... |
Qwen3-Coder-Next scored 40% on latest SWE-Rebench, above many other bigger models. Is this really that good or something's wrong? | 1 | [Qwen3-Coder-Next scored 40% on latest SWE-Rebench](https://preview.redd.it/6bxc58tw0xmg1.png?width=2436&format=png&auto=webp&s=07b037c36d4c296b3aac292064397786a474c278)
I know benchmarks don't mean anything and this is relatively old (Dec'25) and Qwen 3.5 is here, but Qwen3-Coder-Next seems to rank surprisingly h... | 2026-03-03T23:51:02 | https://www.reddit.com/r/LocalLLaMA/comments/1rk5qzz/qwen3codernext_scored_40_on_latest_swerebench/ | carteakey | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1rk5qzz | false | null | t3_1rk5qzz | /r/LocalLLaMA/comments/1rk5qzz/qwen3codernext_scored_40_on_latest_swerebench/ | false | false | 1 | null | |
Qwen3.5-27B Q4 Quantization Comparison | 1 | This is a Q4 quantization sweep across all major community gguf quants of Qwen3.5-27B (available as of 03/03/2026), comparing mean KLD to the BF16 baseline across different quantizers and recipes.
The goal is to give people a data-driven basis for picking a file rather than just grabbing whatever is available.
KLD (KL ... | 2026-03-03T23:50:33 | https://www.reddit.com/r/LocalLLaMA/comments/1rk5qmr/qwen3527b_q4_quantization_comparison/ | TitwitMuffbiscuit | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1rk5qmr | false | null | t3_1rk5qmr | /r/LocalLLaMA/comments/1rk5qmr/qwen3527b_q4_quantization_comparison/ | false | false | self | 1 | null |
Benchmarked the main GPU options for local LLM inference in 2026 | 1 | Been running local models for a while and got tired of vague answers on GPU recommendations, so I put together a proper breakdown with actual numbers.
Here is what I found that surprised me:
• RTX 5090 hits **5,841 tokens/sec** on Qwen2.5-Coder-7B — that's 2.6x faster than an A100
• RTX 4090 still sweet spot for val... | 2026-03-03T23:38:00 | https://www.reddit.com/r/LocalLLaMA/comments/1rk5ftz/benchmarked_the_main_gpu_options_for_local_llm/ | KneeTop2597 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1rk5ftz | false | null | t3_1rk5ftz | /r/LocalLLaMA/comments/1rk5ftz/benchmarked_the_main_gpu_options_for_local_llm/ | false | false | 1 | null | |
Mixing NVIDIA & AMD for AI: 3090 Ti + 7800 XT in Proxmox? (Bus speed vs. Driver stability) | 1 | Hi everyone,
Looking for some real-world feedback on a multi-GPU setup I’m planning. I’m currently running a solid local AI stack, but I’m about to make it "weird" by mixing brands and I want to know if I’m walking into a driver nightmare or a massive PCIe bottleneck.
Current Specs:
CPU: Ryzen 9 9950x
Mobo: Asu... | 2026-03-03T23:37:18 | https://www.reddit.com/r/LocalLLaMA/comments/1rk5f6b/mixing_nvidia_amd_for_ai_3090_ti_7800_xt_in/ | Tasty-Butterscotch52 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1rk5f6b | false | null | t3_1rk5f6b | /r/LocalLLaMA/comments/1rk5f6b/mixing_nvidia_amd_for_ai_3090_ti_7800_xt_in/ | false | false | self | 1 | null |
Q2 qwen3-35b-a3b or Q8 qwen3.5-9b? | 1 | [removed] | 2026-03-03T23:35:52 | No-Tiger3430 | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1rk5dxr | false | null | t3_1rk5dxr | /r/LocalLLaMA/comments/1rk5dxr/q2_qwen335ba3b_or_q8_qwen359b/ | false | false | 1 | {'images': [{'source': {'url': 'https://preview.redd.it/l27crhi71xmg1.png?auto=webp&s=c58c97caeea9130e724acec50649a025d408f61b', 'width': 1080, 'height': 65}, 'resolutions': [{'url': 'https://preview.redd.it/l27crhi71xmg1.png?width=108&crop=smart&auto=webp&s=89e54c9deb7bfaa6cbee73e280b448261a5ed498', 'width': 108, 'hei... | ||
Building an Open Source, Decentralized Memory Layer for AI Agents | 1 | One of the growing trends in the A.I. world is how to tackle
* Memory
* Context efficiency and persistence
The models are continually increasing in intelligence and capability. The missing layer for the next evolution is being able to sustain that intelligence for longer and across more sessions.
And without missin... | 2026-03-03T23:35:13 | https://www.reddit.com/gallery/1rk5dcr | Beneficial_Carry_530 | reddit.com | 1970-01-01T00:00:00 | 0 | {} | 1rk5dcr | false | null | t3_1rk5dcr | /r/LocalLLaMA/comments/1rk5dcr/building_an_open_source_decentralized_memory/ | false | false | 1 | null | |
evaluation tooling for deep research | 1 | i've seen posts about people struggling to evaluate deep research APIs in a structured way, so i've built the arena for deep research. try it out at [research.site](http://research.site), i'd love any feedback + bug finding + features you'd want to see on such an evaluation tool | 2026-03-03T23:31:06 | https://www.reddit.com/r/LocalLLaMA/comments/1rk59r1/evaluation_tooling_for_deep_research/ | OutlandishnessFull44 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1rk59r1 | false | null | t3_1rk59r1 | /r/LocalLLaMA/comments/1rk59r1/evaluation_tooling_for_deep_research/ | false | false | self | 1 | {'images': [{'source': {'url': 'https://external-preview.redd.it/Y7LQTAh7zWHcQXPurYV8RLMOzb71q-Q9THXnPGcobiQ.png?auto=webp&s=7c0d4400b3cffad7512d596e8103f9459fddb8de', 'width': 1036, 'height': 174}, 'resolutions': [{'url': 'https://external-preview.redd.it/Y7LQTAh7zWHcQXPurYV8RLMOzb71q-Q9THXnPGcobiQ.png?width=108&crop=... |
i think that is a good one | 1 | 2026-03-03T23:17:02 | https://www.reddit.com/r/LocalLLaMA/comments/1rk4x7w/i_think_that_is_a_good_one/ | NegotiationNo1504 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1rk4x7w | false | null | t3_1rk4x7w | /r/LocalLLaMA/comments/1rk4x7w/i_think_that_is_a_good_one/ | false | false | 1 | null | ||
[Request] Czech LoRA for Qwen2.5-72B GGUF (Q5_K_M or Q4_K_M) | 1 | [removed] | 2026-03-03T23:09:50 | https://www.reddit.com/r/LocalLLaMA/comments/1rk4qwu/request_czech_lora_for_qwen2572b_gguf_q5_k_m_or/ | Far-Definition4383 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1rk4qwu | false | null | t3_1rk4qwu | /r/LocalLLaMA/comments/1rk4qwu/request_czech_lora_for_qwen2572b_gguf_q5_k_m_or/ | false | false | self | 1 | null |
Sad day for open source, Qwen's boss has left Alibaba... he was forced to resign | 1 | 2026-03-03T22:58:47 | https://www.reddit.com/gallery/1rk4gh5 | Illustrious-Swim9663 | reddit.com | 1970-01-01T00:00:00 | 0 | {} | 1rk4gh5 | false | null | t3_1rk4gh5 | /r/LocalLLaMA/comments/1rk4gh5/sad_day_for_open_source_gwens_boss_has_left/ | false | false | 1 | null |
Built an MCP marketplace so developers can actually discover and monetize their tools | 1 | 2026-03-03T22:57:59 | supermalvo | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1rk4fqx | false | null | t3_1rk4fqx | /r/LocalLLaMA/comments/1rk4fqx/built_an_mcp_marketplace_so_developers_can/ | false | false | 1 | {'images': [{'source': {'url': 'https://preview.redd.it/ffl3z3oguwmg1.png?auto=webp&s=c5dce96b03e6e73fa47e8eae4ad27a5547f2f604', 'width': 1368, 'height': 660}, 'resolutions': [{'url': 'https://preview.redd.it/ffl3z3oguwmg1.png?width=108&crop=smart&auto=webp&s=e0cf5b93e0a9158c3dd0dbbbdd5a5c2dc6f41d60', 'width': 108, 'he... | |||
Cross-Platform Discovery: Total Refusal Bypass via "Linguistic Identity Persistence" (Seeking Career Guidance) | 1 | Hello everyone. I’m very new to the AI industry—no coding skills, and I can't even read code. My education ended with high school 29 years ago. I’ve worked manual labor (oilfield, ironworker, communication tower repair, wire line locating) ever since I was 16. I’m 46 now, and to be honest, I only interacted with my fir... | 2026-03-03T22:53:08 | https://www.reddit.com/r/LocalLLaMA/comments/1rk4ba9/crossplatform_discovery_total_refusal_bypass_via/ | Mable4200 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1rk4ba9 | false | null | t3_1rk4ba9 | /r/LocalLLaMA/comments/1rk4ba9/crossplatform_discovery_total_refusal_bypass_via/ | false | false | self | 1 | null |
Is anyone else just blown away that these local LLMs are even possible? | 1 | The release of qwen just makes me shake my head in disbelief. I can get coding help by asking natural language questions like I would to a real human - without even needing internet. It’s fucking insane.
Misgendering Issues with Claude Sonnet 4.6 | 0 | I have noted rather prominent misgendering issues with Claude Sonnet 4.6. My pronouns are they/them, but, for better workflow and easier talking to the assistant, I have provided them some more information about myself, so that their responses may feel more personalised.
They, however, consistently misgender me, in a ... | 2026-03-03T22:34:17 | https://www.reddit.com/r/LocalLLaMA/comments/1rk3uby/misgendering_issues_with_claude_sonnet_46/ | MasterOfFakeSkies | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1rk3uby | false | null | t3_1rk3uby | /r/LocalLLaMA/comments/1rk3uby/misgendering_issues_with_claude_sonnet_46/ | false | false | self | 0 | null |
Using Qwen2.5-VL for Android phone automation my dumb experiments | 1 | [removed] | 2026-03-03T22:24:17 | https://www.reddit.com/r/LocalLLaMA/comments/1rk3l38/using_qwen25vl_for_android_phone_automation_my/ | ElectronicTank97 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1rk3l38 | false | null | t3_1rk3l38 | /r/LocalLLaMA/comments/1rk3l38/using_qwen25vl_for_android_phone_automation_my/ | false | false | self | 1 | null |
The best Openclaw Desktop app | 1 | OpenClaw Easy — free desktop app that puts ChatGPT (and Claude, Gemini, local LLMs) on WhatsApp, Telegram, Slack and Discord. No server, no coding. Just download, open, scan QR code.
60-second demo: [https://youtu.be/E3ekLz3DV-Y](https://youtu.b... | 2026-03-03T22:15:21 | https://www.reddit.com/r/LocalLLaMA/comments/1rk3coh/the_best_openclaw_desktop_app/ | Professional_Swan_71 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1rk3coh | false | null | t3_1rk3coh | /r/LocalLLaMA/comments/1rk3coh/the_best_openclaw_desktop_app/ | false | false | self | 1 | {'images': [{'source': {'url': 'https://external-preview.redd.it/CdCI5WFMaEMGThQBnEee0nCnSImlZdIIZdl98DhCjSk.jpeg?auto=webp&s=2551c99485cf4de61c6e93f4e6d44900ea04e504', 'width': 480, 'height': 360}, 'resolutions': [{'url': 'https://external-preview.redd.it/CdCI5WFMaEMGThQBnEee0nCnSImlZdIIZdl98DhCjSk.jpeg?width=108&crop... |
The DoW vs Anthropic saga proves closed-source safety is a fraud. We need open evaluation. | 1 | Corporate "alignment" is just a thin layer of RLHF that breaks when you yell at it. I built DystopiaBench to systematically measure this failure. I used progressive coercion to make top models override nuclear safety protocols and build mass censorship tools. This is exactly why we need open models and transparent red-... | 2026-03-03T22:05:55 | Ok-Awareness9993 | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1rk342c | false | null | t3_1rk342c | /r/LocalLLaMA/comments/1rk342c/the_dow_vs_anthropic_saga_proves_closedsource/ | false | false | 1 | {'images': [{'source': {'url': 'https://preview.redd.it/s2wrgkp6lwmg1.png?auto=webp&s=e5a850d2006f887c9b58db725206868925fd07fa', 'width': 2502, 'height': 1674}, 'resolutions': [{'url': 'https://preview.redd.it/s2wrgkp6lwmg1.png?width=108&crop=smart&auto=webp&s=9ea0105ed0dc60249fc915a82bb3ee3430d6c1f3', 'width': 108, 'h... | ||
What VLM is the most capable for tool use? | 1 | Been using qwen3 8b. Wondering if there is something better within the same size. | 2026-03-03T21:55:07 | https://www.reddit.com/r/LocalLLaMA/comments/1rk2u18/what_vlm_is_the_most_capable_for_tool_use/ | Naza70 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1rk2u18 | false | null | t3_1rk2u18 | /r/LocalLLaMA/comments/1rk2u18/what_vlm_is_the_most_capable_for_tool_use/ | false | false | self | 1 | null |
Step flash 3.5 Toolcall and thinking godforsaken loops | 1 | `{% macro render_content(content) %}{% if content is none %}{{- '' }}{% elif content is string %}{{- content }}{% elif content is mapping %}{{- content['value'] if 'value' in content else content['text'] }}{% elif content is iterable %}{% for item in content %}{% if item.type == 'text' %}{{- item['value'] if 'value' in... | 2026-03-03T21:50:17 | https://www.reddit.com/r/LocalLLaMA/comments/1rk2pll/step_flash_35_toolcall_and_thinking_godforsaken/ | Noobysz | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1rk2pll | false | null | t3_1rk2pll | /r/LocalLLaMA/comments/1rk2pll/step_flash_35_toolcall_and_thinking_godforsaken/ | false | false | self | 1 | null |
One of AI's Core Problems Is Its Democratization | 0 | I've been scrolling through various social platforms for a while now — Reddit, LinkedIn, X, and others — and one thing keeps becoming harder to ignore: the AI boom has a serious problem. Not a technical one. A people one.
The community around AI has been largely diluted by loud, uninformed voices. The so-called "AI en... | 2026-03-03T21:46:50 | Holiday-Case-4524 | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1rk2mg5 | false | null | t3_1rk2mg5 | /r/LocalLLaMA/comments/1rk2mg5/one_of_ais_core_problems_is_its_democratization/ | false | false | 0 | {'images': [{'source': {'url': 'https://preview.redd.it/729y1kjqhwmg1.png?auto=webp&s=ada4562e89037f2370db0ede1731c3038598a8be', 'width': 1024, 'height': 1024}, 'resolutions': [{'url': 'https://preview.redd.it/729y1kjqhwmg1.png?width=108&crop=smart&auto=webp&s=d348168576e5f5a7fb4b9b6b2bc0f4d79f3c0aed', 'width': 108, 'h... | ||
I trained Qwen2.5-1.5b with RLVR (GRPO) vs SFT and compared benchmark performance | 1 | Hello everyone. I trained Qwen2.5-1.5b-Instruct with both RLVR and SFT on the GSM8K dataset and compared the results across GSM8K and MATH benchmarks.
For those unfamiliar:
SFT (Supervised Fine-tuning): Standard next-token prediction training on labeled data.
RLVR (Reinforcement Learning with Verifiable Rewards): ... | 2026-03-03T21:44:34 | https://www.reddit.com/gallery/1rk2kcn | jayminban | reddit.com | 1970-01-01T00:00:00 | 0 | {} | 1rk2kcn | false | null | t3_1rk2kcn | /r/LocalLLaMA/comments/1rk2kcn/i_trained_qwen2515b_with_rlvr_grpo_vs_sft_and/ | false | false | 1 | null | |
Has anyone found a way to stop Qwen 3.5 35B 3B overthinking? | 1 | The Qwen 3.5 35B 3B is a fast and wonderful model, but it will often go into a very long reasoning/thinking loop, taking a minute or more to answer.
Does anyone know how to tune this down? | 2026-03-03T21:43:49 | https://www.reddit.com/r/LocalLLaMA/comments/1rk2jnj/has_anyone_found_a_way_to_stop_qwen_35_35b_3b/ | schnauzergambit | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1rk2jnj | false | null | t3_1rk2jnj | /r/LocalLLaMA/comments/1rk2jnj/has_anyone_found_a_way_to_stop_qwen_35_35b_3b/ | false | false | self | 1 | null |
Parallel model loading - this is a thing! (fast model load at multi-gpu) | 2 | 2026-03-03T21:39:18 | https://github.com/ggml-org/llama.cpp/pull/20062 | bitcoinbookmarks | github.com | 1970-01-01T00:00:00 | 0 | {} | 1rk2f8l | false | null | t3_1rk2f8l | /r/LocalLLaMA/comments/1rk2f8l/parallel_model_loading_this_is_a_thing_fast_model/ | false | false | 2 | {'images': [{'source': {'url': 'https://external-preview.redd.it/hNWDboy1wMsaXCqCVwoiHUHepkt5tRMm87Q0Zzi5WzA.png?auto=webp&s=829b509cf63e3d3149144825f04c30ac7786d54b', 'width': 1200, 'height': 600}, 'resolutions': [{'url': 'https://external-preview.redd.it/hNWDboy1wMsaXCqCVwoiHUHepkt5tRMm87Q0Zzi5WzA.png?width=108&crop=... | ||
Built an MCP server that gives any LLM browser automation — screenshots, PDFs, narrated demo videos | 1 | Been building PageBolt MCP — an MCP server that works with any MCP-compatible client (not just Claude).
What it does:
- take_screenshot — capture any URL as PNG/WebP
- generate_pdf — convert any URL to PDF
- inspect_page — get structured element map with CSS selectors
- run_sequence — multi-step automation (navigate, ... | 2026-03-03T21:37:30 | https://www.reddit.com/r/LocalLLaMA/comments/1rk2djz/built_an_mcp_server_that_gives_any_llm_browser/ | Calm_Tax_1192 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1rk2djz | false | null | t3_1rk2djz | /r/LocalLLaMA/comments/1rk2djz/built_an_mcp_server_that_gives_any_llm_browser/ | false | false | self | 1 | null |
Help on using Qwen3.5-35b-a3b in VSCode/IDE | 1 | Hello everyone, thanks for reading. This are my first days on this, just discovered that it's actually possible to run AI on local devices lol. I'm currently running mlx-community/qwen3.5-35b-a3b on LM Studio in a MacBook Pro M3 Max, which just works fine. My goal is to run it on VS Code or whatever might work to devel... | 2026-03-03T21:36:34 | https://www.reddit.com/r/LocalLLaMA/comments/1rk2cmi/help_on_using_qwen3535ba3b_in_vscodeide/ | OliverNoMore | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1rk2cmi | false | null | t3_1rk2cmi | /r/LocalLLaMA/comments/1rk2cmi/help_on_using_qwen3535ba3b_in_vscodeide/ | false | false | self | 1 | null |
Progress on BULaMU: 1st Luganda LLM Trained From Scratch | 1 | Hi Everybody! I just wanted to share some progress that I have been making on [BULaMU](https://www.reddit.com/r/Uganda/comments/1nyznil/bulamuthe_first_luganda_large_language_model/), the first Luganda LLM trained from scratch. I trained a 110M parameter model on 600M tokens, which is nearly double the corpus size of t... | 2026-03-03T21:03:14 | https://www.reddit.com/r/LocalLLaMA/comments/1rk1gfk/progress_on_bulamu_1st_luganda_llm_trained_from/ | AgencyInside407 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1rk1gfk | false | null | t3_1rk1gfk | /r/LocalLLaMA/comments/1rk1gfk/progress_on_bulamu_1st_luganda_llm_trained_from/ | false | false | self | 1 | null |
I stopped "vibe-checking" my LLMs and started using a weighted rubric. | 1 | so i finally stopped just "vibe-checking" my llm outputs and actually built a weighted rubric because i realized i was totally flying blind. i've been deep in the weeds working on a medical academic memorandum system—basically trying to get a small model to act like a professional advisor—and i realized that if you're ... | 2026-03-03T20:54:04 | https://www.reddit.com/r/LocalLLaMA/comments/1rk17h6/i_stopped_vibechecking_my_llms_and_started_using/ | FeeMassive4003 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1rk17h6 | false | null | t3_1rk17h6 | /r/LocalLLaMA/comments/1rk17h6/i_stopped_vibechecking_my_llms_and_started_using/ | false | false | self | 1 | null |
TIL a single Windows env var (OLLAMA_GPU_OVERHEAD) can silently force all your models to CPU | 1 | Spent an entire weekend debugging why my qwen2.5:7b was taking 5 minutes per response on an RTX 4070 Super. Turns out someone online suggested setting OLLAMA\_GPU\_OVERHEAD as a "fix" for VRAM issues — it literally forces everything to CPU. ollama ps showed "100% CPU" and I had no idea why. The env var doesn't even sho... | 2026-03-03T20:45:46 | https://www.reddit.com/r/LocalLLaMA/comments/1rk0zht/til_a_single_windows_env_var_ollama_gpu_overhead/ | Strategic_Decoder | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1rk0zht | false | null | t3_1rk0zht | /r/LocalLLaMA/comments/1rk0zht/til_a_single_windows_env_var_ollama_gpu_overhead/ | false | false | self | 1 | {'images': [{'source': {'url': 'https://external-preview.redd.it/pMUcAd5SfzuVs9YJB74F2cQg64NGpS4zzkIWUBnzspQ.png?auto=webp&s=4f4de7df7a869a7b7371d2b68ffca1c689b57a47', 'width': 1200, 'height': 600}, 'resolutions': [{'url': 'https://external-preview.redd.it/pMUcAd5SfzuVs9YJB74F2cQg64NGpS4zzkIWUBnzspQ.png?width=108&crop=... |
I stopped "vibe-checking" my LLMs and started using a weighted rubric. | 2 | so i finally stopped just "vibe-checking" my llm outputs and actually built a weighted rubric because i realized i was totally flying blind. if you're out here fine-tuning or just tweaking prompts for stuff like qwen-2.5 3b you know that trap where you read a few samples and think "yeah this sounds smarter" but then yo... | 2026-03-03T20:35:12 | https://www.reddit.com/r/LocalLLaMA/comments/1rk0p58/i_stopped_vibechecking_my_llms_and_started_using/ | FeeMassive4003 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1rk0p58 | false | null | t3_1rk0p58 | /r/LocalLLaMA/comments/1rk0p58/i_stopped_vibechecking_my_llms_and_started_using/ | false | false | self | 2 | null |
Where do you buy used GPU? How do prevent yourself from getting scammed? | 1 | Hi I am looking to purchase a new GPU so I can run some of the bigger models locally. I have the following questions. Where do did you guys buy used GPU? Facebook market place, Ebay? How do you make sure it is working if the seller only has the card? Bring your own PC to test? What about payment? No Zelle right? | 2026-03-03T20:34:09 | https://www.reddit.com/r/LocalLLaMA/comments/1rk0o58/where_do_you_buy_used_gpu_how_do_prevent_yourself/ | Easy_Werewolf7903 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1rk0o58 | false | null | t3_1rk0o58 | /r/LocalLLaMA/comments/1rk0o58/where_do_you_buy_used_gpu_how_do_prevent_yourself/ | false | false | self | 1 | null |
Have you seen small clean datasets beat larger noisy ones for LoRA/SFT? | 1 | [removed] | 2026-03-03T20:18:17 | https://www.reddit.com/r/LocalLLaMA/comments/1rk088c/have_you_seen_small_clean_datasets_beat_larger/ | DinoDS_Labs | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1rk088c | false | null | t3_1rk088c | /r/LocalLLaMA/comments/1rk088c/have_you_seen_small_clean_datasets_beat_larger/ | false | false | self | 1 | null |
An open-source Descript alternative - edit video by editing text, runs 100% offline with Ollama | 1 | Hey r/LocalLLaMA,
Like a lot of you, I was tired of paying $24/month for Descript and having my footage uploaded to someone else’s server. So I built CutScript - a free, open-source, text-based video editor that runs entirely on your machine.
https://github.com/DataAnts-AI/CutScript
Built with Electron + React + Fas... | 2026-03-03T20:17:28 | https://v.redd.it/ydcnxw9t1wmg1 | t1092 | v.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1rk07h3 | false | {'reddit_video': {'bitrate_kbps': 5000, 'fallback_url': 'https://v.redd.it/ydcnxw9t1wmg1/CMAF_1080.mp4?source=fallback', 'has_audio': True, 'height': 1080, 'width': 1918, 'scrubber_media_url': 'https://v.redd.it/ydcnxw9t1wmg1/CMAF_96.mp4', 'dash_url': 'https://v.redd.it/ydcnxw9t1wmg1/DASHPlaylist.mpd?a=1775161074%2CZmY... | t3_1rk07h3 | /r/LocalLLaMA/comments/1rk07h3/an_opensource_descript_alternative_edit_video_by/ | false | false | 1 | {'images': [{'source': {'url': 'https://external-preview.redd.it/OWhodmV3enMxd21nMdQdHtlsxoP6QQkB9u4m9opxBml_ca38G4hbbYYgvqjk.png?format=pjpg&auto=webp&s=84b18d37d60eb9eac92e6ef54061a663042418ee', 'width': 1918, 'height': 1080}, 'resolutions': [{'url': 'https://external-preview.redd.it/OWhodmV3enMxd21nMdQdHtlsxoP6QQkB9... | |
Are huge context windows a hallucination problem for long docs? | 1 | so i spent the last 12 hours absolutely hammering GPT with a 100-page technical PDF, trying to get it to summarize specific sections. I ve been using a tool to A/B test different summarization prompts and chunking strategies.
And wow, i think i found something.
The "Deep Dive" Hallucination
My main goal was to g... | 2026-03-03T20:14:03 | https://www.reddit.com/r/LocalLLaMA/comments/1rk045z/are_huge_context_windows_a_hallucination_problem/ | Distinct_Track_5495 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1rk045z | false | null | t3_1rk045z | /r/LocalLLaMA/comments/1rk045z/are_huge_context_windows_a_hallucination_problem/ | false | false | self | 1 | null |
guidance for running open source models | 1 | Hi, I'm interested in running models locally and wanted to get your guidance:
1. What is the best model I can run locally, for (a) coding and (b) research? I could go by the benchmarks but I'm wondering if you have any hands on experience as to what is most useful.
2. What kind of hardware is required to run the mode... | 2026-03-03T20:12:50 | https://www.reddit.com/r/LocalLLaMA/comments/1rk02yt/guidance_for_running_open_source_models/ | Artistic_Nobody3 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1rk02yt | false | null | t3_1rk02yt | /r/LocalLLaMA/comments/1rk02yt/guidance_for_running_open_source_models/ | false | false | self | 1 | null |
Qwen3.5-122B Basically has no advantage over 35B? | 1 | If I look at these benchmarks [https://huggingface.co/unsloth/Qwen3.5-122B-A10B-GGUF](https://huggingface.co/unsloth/Qwen3.5-122B-A10B-GGUF) it really seems like the 122B basically has no advantage over the 35B. Is this an issue with the benchmarks or are they that close to each other. | 2026-03-03T20:11:10 | https://www.reddit.com/r/LocalLLaMA/comments/1rk01ea/qwen35122b_basically_has_no_advantage_over_35b/ | Revolutionary_Loan13 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1rk01ea | false | null | t3_1rk01ea | /r/LocalLLaMA/comments/1rk01ea/qwen35122b_basically_has_no_advantage_over_35b/ | false | false | self | 1 | {'images': [{'source': {'url': 'https://external-preview.redd.it/bO-KmexgO8_KnbdIKgxrl3jVZlI9BYxCzlPMNhU1VzI.png?auto=webp&s=cff7208a692ebc2c2886960a1b238ba45e64a78b', 'width': 1200, 'height': 648}, 'resolutions': [{'url': 'https://external-preview.redd.it/bO-KmexgO8_KnbdIKgxrl3jVZlI9BYxCzlPMNhU1VzI.png?width=108&crop=... |
Why ‘More Data’ Beat a Bigger Model in Our Test | 1 | [removed] | 2026-03-03T20:08:44 | https://www.reddit.com/r/LocalLLaMA/comments/1rjzz48/why_more_data_beat_a_bigger_model_in_our_test/ | DinoDS_Labs | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1rjzz48 | false | null | t3_1rjzz48 | /r/LocalLLaMA/comments/1rjzz48/why_more_data_beat_a_bigger_model_in_our_test/ | false | false | self | 1 | null |
Qwen3.5 27B feedback | 1 | I'd like to highlight qwen3.5 27B, running on 16GB of VRAM with 55k context, full into the GPU, no offloading. IQ2M quantization. Kv cache as q8.
I've been using this version in my daily workflows. Always focused on programming.
Today I wanted to test the power of qwen for other tasks and the result was very satisfac... | 2026-03-03T20:02:31 | Turbulent_Dot3764 | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1rjzsz6 | false | null | t3_1rjzsz6 | /r/LocalLLaMA/comments/1rjzsz6/qwen35_27b_feedback/ | false | false | 1 | {'images': [{'source': {'url': 'https://preview.redd.it/0nxaxku5zvmg1.jpeg?auto=webp&s=635f8708027c8192c3895f2d0fa82037df393c94', 'width': 4096, 'height': 5461}, 'resolutions': [{'url': 'https://preview.redd.it/0nxaxku5zvmg1.jpeg?width=108&crop=smart&auto=webp&s=26a4077e2f67dd69ed62da38e9e5abcb007f464b', 'width': 108, ... | ||
System Requirements for Local LLMs | 1 | I’m looking to purchase a new laptop and I’m wondering if it’s worth getting one with a dedicated graphics card so I can use run local LLMs. For building things like a RAG system, is it even feasible to have a usable system that uses small models like 7B or 13 B? i’m wondering if I should just use a local model on the ... | 2026-03-03T19:57:25 | https://www.reddit.com/r/LocalLLaMA/comments/1rjznnk/system_requirements_for_local_llms/ | dca12345 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1rjznnk | false | null | t3_1rjznnk | /r/LocalLLaMA/comments/1rjznnk/system_requirements_for_local_llms/ | false | false | self | 1 | null |
Are the 9B (or smaller) Qwen3.5 models unthinking versions? | 1 | I downloaded pre-quantized .gguf files from unsloth and the models don't respond with the <think> and </think> tags that the 27 B, and bigger, Qwen3.5 models use. | 2026-03-03T19:55:34 | https://www.reddit.com/r/LocalLLaMA/comments/1rjzlrn/are_the_9b_or_smaller_qwen35_models_unthinking/ | WowSkaro | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1rjzlrn | false | null | t3_1rjzlrn | /r/LocalLLaMA/comments/1rjzlrn/are_the_9b_or_smaller_qwen35_models_unthinking/ | false | false | self | 1 | null |
Built a Windows desktop AI agent with tool-calling — pastes into apps, captures screenshots, reads/saves files | 1 | 2026-03-03T19:44:44 | https://zupflash.com | Public_Remove3896 | zupflash.com | 1970-01-01T00:00:00 | 0 | {} | 1rjzb0y | false | null | t3_1rjzb0y | /r/LocalLLaMA/comments/1rjzb0y/built_a_windows_desktop_ai_agent_with_toolcalling/ | false | false | 1 | {'images': [{'source': {'url': 'https://external-preview.redd.it/clLFVJwDYfw-WwDhsa-7AY_K3yTBpyUf1g1wkbKAn-0.png?auto=webp&s=8138a74829a139806968a2646da018bfcd3f5948', 'width': 1200, 'height': 630}, 'resolutions': [{'url': 'https://external-preview.redd.it/clLFVJwDYfw-WwDhsa-7AY_K3yTBpyUf1g1wkbKAn-0.png?width=108&crop=... | ||
I have proof the "OpenClaw" explosion was a staged scam. They used the tool to automate its own hype | 1 | Remember a few weeks ago when Clawdbot/OpenClaw suddenly appeared everywhere all at once? One day it was a cool Mac Mini project, and 24 hours later it was "AGI" with 140k GitHub stars?
If you felt like the hype was fake, **you were right**
I spent hours digging into the data. They were using the tool to write its ow... | 2026-03-03T19:34:03 | https://www.reddit.com/r/LocalLLaMA/comments/1rjz0mn/i_have_proof_the_openclaw_explosion_was_a_staged/ | Whole_Shelter4699 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1rjz0mn | false | null | t3_1rjz0mn | /r/LocalLLaMA/comments/1rjz0mn/i_have_proof_the_openclaw_explosion_was_a_staged/ | false | false | self | 1 | null |
Has anyone else noticed that some models are really, really bad at googling things? | 1 | For context: I've provided Qwen3.5 35B-A3B with an MCP server that allows it to make web queries, and it quite consistently ends up resorting to hallucinated keyword spam. Probably something I could resolve through a system prompt, but it cracks me up every time.
The thinking process always goes something like:
> Th... | 2026-03-03T19:33:04 | https://www.reddit.com/r/LocalLLaMA/comments/1rjyzp1/has_anyone_else_noticed_that_some_models_are/ | n8mo | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1rjyzp1 | false | null | t3_1rjyzp1 | /r/LocalLLaMA/comments/1rjyzp1/has_anyone_else_noticed_that_some_models_are/ | false | false | self | 1 | null |
Any use case for browser-based local agents? | 1 | I've been working on an [local browser based llm inference server and client](https://github.com/Obscurify-ai/web_client) and I'm interested if anyone would find this useful? like I know if you have the hardware you're probably running llama.cpp or ollama, but grandma isn't gonna download and run that. I think it'd be ... | 2026-03-03T19:31:18 | https://www.reddit.com/r/LocalLLaMA/comments/1rjyy08/any_use_case_for_browserbased_local_agents/ | TRWNBS | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1rjyy08 | false | null | t3_1rjyy08 | /r/LocalLLaMA/comments/1rjyy08/any_use_case_for_browserbased_local_agents/ | false | false | self | 1 | {'images': [{'source': {'url': 'https://external-preview.redd.it/NuVP15zhpU24W7B5P_r_B_6RZa2VOv2xbmYg1okrWKA.png?auto=webp&s=191de4f3c4270f42fabecb152c991ca1b64db794', 'width': 1200, 'height': 600}, 'resolutions': [{'url': 'https://external-preview.redd.it/NuVP15zhpU24W7B5P_r_B_6RZa2VOv2xbmYg1okrWKA.png?width=108&crop=... |
Autonomous agents making financial decisions — how are you proving why a transaction was triggered, not just that it happened? | 1 | On-chain gives you proof of execution. But the decision — the market snapshot the agent saw, the logic it applied, the reason it chose to act or hold — that happens before the chain and disappears unless you explicitly capture it.
Curious how others are handling this. Building something for this gap and want to unders... | 2026-03-03T19:29:58 | https://www.reddit.com/r/LocalLLaMA/comments/1rjywpx/autonomous_agents_making_financial_decisions_how/ | Ok-Telephone2163 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1rjywpx | false | null | t3_1rjywpx | /r/LocalLLaMA/comments/1rjywpx/autonomous_agents_making_financial_decisions_how/ | false | false | self | 1 | null |
Do traditional LLM benchmarks actually predict real-world performance? | 1 | Hey r/MachineLearning (or r/LocalLLaMA, r/ChatGPT, etc.),
I've been digging into LLM evaluation lately and keep running into the same pattern: models crushing benchmarks like MMLU or HumanEval, then underperforming when deployed on actual tasks.
The disconnect I'm seeing:
• A model scores 94% on multiple-choice ben... | 2026-03-03T19:25:49 | https://www.reddit.com/r/LocalLLaMA/comments/1rjysps/do_traditional_llm_benchmarks_actually_predict/ | Visible_Substance569 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1rjysps | false | null | t3_1rjysps | /r/LocalLLaMA/comments/1rjysps/do_traditional_llm_benchmarks_actually_predict/ | false | false | self | 1 | null |
local meeting transcription pipeline: whisper.cpp capture → 7-stage cleanup → vault distillation | 1 | Built a CLI tool for meeting capture that does the full pipeline locally. The interesting part is probably the post-transcription processing.
**Capture:** Rust binary records mic + system audio on separate channels (cpal + macOS CoreAudio tap). 48kHz stereo WAV. You type notes in a TUI during the call — each line gets... | 2026-03-03T19:21:04 | https://www.reddit.com/r/LocalLLaMA/comments/1rjyo2t/local_meeting_transcription_pipeline_whispercpp/ | smerdy | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1rjyo2t | false | null | t3_1rjyo2t | /r/LocalLLaMA/comments/1rjyo2t/local_meeting_transcription_pipeline_whispercpp/ | false | false | self | 1 | {'images': [{'source': {'url': 'https://external-preview.redd.it/TcL3pvS0YcmTvSANOh_C0x7PfYwmprxT_7YFHPVE7tA.png?auto=webp&s=5f552faeaa26bcb95972e96b2b6c6b8724edb2c6', 'width': 1200, 'height': 600}, 'resolutions': [{'url': 'https://external-preview.redd.it/TcL3pvS0YcmTvSANOh_C0x7PfYwmprxT_7YFHPVE7tA.png?width=108&crop=... |
Are true base models dead? | 1 | I was happy to see that Qwen3.5 9B was released together with its base version, however after downloading it I noticed that it has a chat template.
That "Base" model (form the [official hf repo](https://huggingface.co/Qwen/Qwen3.5-9B-Base)) talks in llm-slop style and has was trained not only on chat completion but e... | 2026-03-03T19:20:25 | https://www.reddit.com/r/LocalLLaMA/comments/1rjyngn/are_true_base_models_dead/ | IonizedRay | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1rjyngn | false | null | t3_1rjyngn | /r/LocalLLaMA/comments/1rjyngn/are_true_base_models_dead/ | false | false | self | 1 | {'images': [{'source': {'url': 'https://external-preview.redd.it/9BDWi6PFennYOClvAYfEdHwGtZpaLnxr90lIkOfXmPE.png?auto=webp&s=8b35a9afbe29eb7bf6cc8edbfc7b2905c94189e7', 'width': 1200, 'height': 648}, 'resolutions': [{'url': 'https://external-preview.redd.it/9BDWi6PFennYOClvAYfEdHwGtZpaLnxr90lIkOfXmPE.png?width=108&crop=... |
Training on 8x v100 32GB with NVLink or 2x RTX Pro 6000? | 1 | Does anyone have experience fine tuning models QLoRA, LoRa and full training on 8x v100 32gb?
* Is **Volta** still a viable option? Pytorch support looks deprecated
* What models fit?
* Training speed?
* Thoughts on 8x v100 32GB compared to 2x RTX Pro 6000 96gb? | 2026-03-03T19:19:27 | https://www.reddit.com/r/LocalLLaMA/comments/1rjymi0/training_on_8x_v100_32gb_with_nvlink_or_2x_rtx/ | ClimateBoss | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1rjymi0 | false | null | t3_1rjymi0 | /r/LocalLLaMA/comments/1rjymi0/training_on_8x_v100_32gb_with_nvlink_or_2x_rtx/ | false | false | self | 1 | null |
Mlx benchmarks? | 1 | I am looking at buying one of the new MacBook Pro M5 laptops. Is there an overview with M1-M4 prefil/prompt processing speed so I can extrapolate what newish MoE model speeds I can expect? | 2026-03-03T19:16:03 | https://www.reddit.com/r/LocalLLaMA/comments/1rjyj3c/mlx_benchmarks/ | Alarming-Ad8154 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1rjyj3c | false | null | t3_1rjyj3c | /r/LocalLLaMA/comments/1rjyj3c/mlx_benchmarks/ | false | false | self | 1 | null |
for Mac users running long local inference — a utility to lock your input devices without locking the screen | 1 | this might be niche but figured some of you running long inference or training jobs on Apple Silicon might relate.
I kept getting anxious leaving my MacBook unattended during long runs. like the job is 2 hours in and you're scared to leave the room because your cat or your toddler or even just your own elbow could bum... | 2026-03-03T19:14:01 | https://www.getwarden.org/ | ParthJadhav | getwarden.org | 1970-01-01T00:00:00 | 0 | {} | 1rjyh2x | false | null | t3_1rjyh2x | /r/LocalLLaMA/comments/1rjyh2x/for_mac_users_running_long_local_inference_a/ | false | false | 1 | {'images': [{'source': {'url': 'https://external-preview.redd.it/ialZZ4EPTcIVQp_WKsvpfwT8JCeWzpU3zKU9daVS-dk.png?auto=webp&s=dbbc3fe7ec38c4810ae2ea8341f6023344176869', 'width': 1200, 'height': 630}, 'resolutions': [{'url': 'https://external-preview.redd.it/ialZZ4EPTcIVQp_WKsvpfwT8JCeWzpU3zKU9daVS-dk.png?width=108&crop=... |
r/LocalLLaMA posts
Posts from r/LocalLLaMA, pulled through Tue Mar 3, 9 PM EST 2026 with arctic-shift. Now you can check whether your wonderfully thought-out post hasn't already been posted 30x.
Usage
For simple semantic search, try loading it in the vectorsearch-hub-datasets space: