<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:dc="http://purl.org/dc/elements/1.1/"><channel><title>Hugging Face Posts</title><link>https://huggingface.co/</link><description>This is a web-scraping RSS feed for Hugging Face trending posts.</description><generator>rfeed v1.1.1</generator><docs>https://github.com/svpino/rfeed/blob/master/README.md</docs><item><title>Leaderboard of Leaderboards — A Real-Time Meta-Ranking of AI Benchmarks</title><link>https://huggingface.co/posts/mayafree/802385854425752</link><description>Leaderboard of Leaderboards — A Real-Time Meta-Ranking of AI Benchmarks MAYA-AI/all-leaderboard Hundreds of AI leaderboards exist on HuggingFace. Knowing which ones the community actually trusts has never been easy — until now. Leaderboard of Leaderboards (LoL) ranks the leaderboards themselves, using live HuggingFace trending scores and cumulative likes as the signal. No editorial curation. No manual selection. Just what the global AI research community is actually visiting and endorsing, surfaced in real time. Sort by trending to see what is capturing attention right now, or by likes to see what has built lasting credibility over time. Nine domain filters let you zero in on what matters most to your work, and every entry shows both its rank within this collection and its real-time global rank across all HuggingFace Spaces. The collection spans well-established standards like Open LLM Leaderboard, Chatbot Arena, MTEB, and BigCodeBench alongside frameworks worth watching. FINAL...</description><pubDate>Sun, 15 Mar 2026 06:06:45 GMT</pubDate><guid isPermaLink="true">https://huggingface.co/posts/mayafree/802385854425752</guid></item><item><title>We collaborated with NVIDIA to teach you about Reinforcement Learning and RL environments. 💚 Learn:</title><link>https://huggingface.co/posts/danielhanchen/475818078032725</link><description>We collaborated with NVIDIA to teach you about Reinforcement Learning and RL environments. 
💚 Learn: • Why RL environments matter + how to build them • When RL is better than SFT • GRPO and RL best practices • How verifiable rewards and RLVR work Blog: https://unsloth.ai/blog/rl-environments</description><pubDate>Sun, 15 Mar 2026 06:06:45 GMT</pubDate><guid isPermaLink="true">https://huggingface.co/posts/danielhanchen/475818078032725</guid></item><item><title>QIE-2509-Object-Remover-Bbox-v3 is a more stable version of the Qwen Image Edit visual grounding–based object removal model. The app was previously featured in HF Spaces of the Week and is now updated with the latest Bbox-v3 LoRA adapter.</title><link>https://huggingface.co/posts/prithivMLmods/435574785554177</link><description>QIE-2509-Object-Remover-Bbox-v3 is a more stable version of the Qwen Image Edit visual grounding–based object removal model. The app was previously featured in HF Spaces of the Week and is now updated with the latest Bbox-v3 LoRA adapter. 🤗 Demo: prithivMLmods/QIE-Object-Remover-Bbox 🤗 LoRA: prithivMLmods/QIE-2509-Object-Remover-Bbox-v3 🤗 Collection: https://huggingface.co/collections/prithivMLmods/qwen-image-edit-layout-bbox To learn more, visit the app page or the respective model pages.</description><pubDate>Sun, 15 Mar 2026 06:06:45 GMT</pubDate><guid isPermaLink="true">https://huggingface.co/posts/prithivMLmods/435574785554177</guid></item><item><title>The hidden gem of open-source embedding models: LCO-Embedding</title><link>https://huggingface.co/posts/marksverdhei/231999848511500</link><description>The hidden gem of open-source embedding models: LCO-Embedding for text, image AND audio! I found this model after reading the recent Massive Audio Embedding Benchmark (MAEB) paper, as it blew the other models out of the water on day zero. I've been using it personally for about a week, and searching my files by describing music, sound effects or images is both practical and entertaining. 
Really underrated model, would highly recommend checking it out: LCO-Embedding/LCO-Embedding-Omni-7B PS: If you're looking to run this model on llama.cpp, I've gone ahead and quantized it for you here 👉 https://huggingface.co/collections/marksverdhei/lco-embedding-omni-gguf</description><pubDate>Sun, 15 Mar 2026 06:06:45 GMT</pubDate><guid isPermaLink="true">https://huggingface.co/posts/marksverdhei/231999848511500</guid></item><item><title>Used by researchers at the Allen Institute, the Simons Foundation, Yale, and other top universities. 🤗</title><link>https://huggingface.co/posts/BibbyResearch/453885408087761</link><description>Used by researchers at the Allen Institute, the Simons Foundation, Yale, and other top universities. 🤗 Researchers are using AI to write their papers. That AI is Bibby AI. Not GPT-5. Not Claude Opus. Not whatever wrapper your institution just paid $50k for. Bibby. While the AI research community spent 2025 debating whether LLMs can handle scientific writing — actual scientists at actual top-tier institutions quietly started shipping papers with it. No press release. No hype cycle. Just results. Here's what they figured out that most people haven't: The bottleneck in research was never the ideas. It was never the experiments. It was the 3am writing sessions where good science goes to die in a Google Doc. Writer's block, LaTeX learning frustration, formatting issues, compiler errors. Bibby is built specifically for that gap. Citation-aware. Argument-aware. Knows when to hedge, when to assert, and — critically — knows not to hallucinate your methods section. 
The institutions adopting it aren't...</description><pubDate>Sun, 15 Mar 2026 06:06:45 GMT</pubDate><guid isPermaLink="true">https://huggingface.co/posts/BibbyResearch/453885408087761</guid></item><item><title>Going for the GOLD: Qwen 3.5 40B Claude 4.5 Opus</title><link>https://huggingface.co/posts/DavidAU/413454074242640</link><description>Going for the GOLD: Qwen 3.5 40B Claude 4.5 Opus Drastically larger, with performance to match. Upgraded Jinja template too. DavidAU/Qwen3.5-40B-Claude-4.5-Opus-High-Reasoning-Thinking</description><pubDate>Sun, 15 Mar 2026 06:06:45 GMT</pubDate><guid isPermaLink="true">https://huggingface.co/posts/DavidAU/413454074242640</guid></item><item><title>Arcade-3B — SmolReasoner</title><link>https://huggingface.co/posts/OzTianlu/137769226400061</link><description>Arcade-3B — SmolReasoner NoesisLab/Arcade-3B Arcade-3B is a 3B instruction-following and reasoning model built on SmolLM3-3B. It is the public release from the ARCADE project at NoesisLab, which investigates the State–Constraint Orthogonality Hypothesis: standard Transformer hidden states conflate factual content and reasoning structure in the same subspace, and explicitly decoupling them improves generalization.</description><pubDate>Sun, 15 Mar 2026 06:06:45 GMT</pubDate><guid isPermaLink="true">https://huggingface.co/posts/OzTianlu/137769226400061</guid></item><item><title>✅ Article highlight: *Programming SI-Core* (art-60-043, v0.1)</title><link>https://huggingface.co/posts/kanaria007/172731692878443</link><description>✅ Article highlight: *Programming SI-Core* (art-60-043, v0.1) TL;DR: What do developers actually write on an SI-Core stack? This note sketches the programming model: **SIL** for goal-native code, **DPIR** as a typed decision IR, **CPU/GSPU backends** for execution, and **SIR** for structural traces. 
The point is to move from prompt surgery + log spelunking toward something closer to normal, testable, compilable software engineering. Read: kanaria007/agi-structural-intelligence-protocols What’s inside: • why SI-Core programming differs from “LLM wrapper microservices” • the mental model: **OBS → SIL → DPIR → backend → RML → SIR** • SIL examples, DPIR sketches, and backend execution shape • local dev loop: sandbox SIRs, si build, si test, replay, inspection • testing strategy: unit tests, structural property tests, GCS regression, Genius Replay • tooling: LSP, ETH/capability lints, timeline and what-if visualizers • migration path: from plain LLM wrappers to SI-native stacks in...</description><pubDate>Sun, 15 Mar 2026 06:06:45 GMT</pubDate><guid isPermaLink="true">https://huggingface.co/posts/kanaria007/172731692878443</guid></item><item><title>We are working on the largest dataset and pre-trained model for text-to-speech and speech-to-text for the low-resource language Marwari in India.</title><link>https://huggingface.co/posts/BibbyResearch/497989260118610</link><description>We are working on the largest dataset and pre-trained model for text-to-speech and speech-to-text for the low-resource language Marwari in India.</description><pubDate>Sun, 15 Mar 2026 06:06:45 GMT</pubDate><guid isPermaLink="true">https://huggingface.co/posts/BibbyResearch/497989260118610</guid></item><item><title>Robonine just published a new article! Mechanical backlash is a common limitation in servo-driven robotic joints. In this experiment, paired Feetech STS3215 servos are used with a small opposing preload to eliminate gearbox play, significantly improving positional stability and motion precision in robotic manipulators.</title><link>https://huggingface.co/posts/branikita/696663984018544</link><description>Robonine just published a new article! Mechanical backlash is a common limitation in servo-driven robotic joints. 
In this experiment, paired Feetech STS3215 servos are used with a small opposing preload to eliminate gearbox play, significantly improving positional stability and motion precision in robotic manipulators. https://robonine.com/backlash-compensation-in-sts3215-servo-actuators/</description><pubDate>Sun, 15 Mar 2026 06:06:45 GMT</pubDate><guid isPermaLink="true">https://huggingface.co/posts/branikita/696663984018544</guid></item></channel></rss>