Ashish Tanwer
ashishtanwer
AI & ML interests
None yet
Recent Activity
- Liked a model about 11 hours ago: DiffSynth-Studio/Qwen-Image-i2L
- Liked a dataset 4 days ago: yutori-ai/navi-bench
- Liked a model 7 days ago: zai-org/AutoGLM-Phone-9B
Organizations

Collections

RAG

DataLabelling
LLM
- AnyCoder
  Running • 2.99k
  Generate code with AI
- Qwen2.5 Coder Artifacts
  Running • Featured • 274
  Generate code from natural language prompts
- QwQ-32B-Preview
  Running • Featured • 922
- Open LLM Leaderboard
  Running on CPU Upgrade • 13.7k
  Track, rank and evaluate open LLMs and chatbots
Evals
ClassicalML
Papers and resources for Classical ML
InfraML
Agents
Transformer
- sentence-transformers/all-mpnet-base-v2
  Sentence Similarity • 0.1B • Updated • 25M • 1.21k
- Exploring the Limits of Transfer Learning with a Unified Text-to-Text Transformer
  Paper • 1910.10683 • Published • 15
- google-t5/t5-base
  Translation • 0.2B • Updated • 2.25M • 758
- Attention Is All You Need
  Paper • 1706.03762 • Published • 104
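The first entry in this collection, sentence-transformers/all-mpnet-base-v2, is a sentence-similarity model. A minimal usage sketch with the sentence-transformers library; the example sentences are made up and the call pattern is the library's standard one, not anything specific to this profile:

```python
# Minimal sentence-similarity sketch for the model listed above.
# Assumes `pip install sentence-transformers`; the example sentences are arbitrary.
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("sentence-transformers/all-mpnet-base-v2")

sentences = [
    "Retrieval-augmented generation grounds answers in retrieved documents.",
    "RAG systems fetch relevant passages before generating a response.",
]
embeddings = model.encode(sentences, convert_to_tensor=True)

# Cosine similarity between the two sentence embeddings.
print(float(util.cos_sim(embeddings[0], embeddings[1])))
```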
DataCleaning
Dataset
- The RefinedWeb Dataset for Falcon LLM: Outperforming Curated Corpora with Web Data, and Web Data Only
  Paper • 2306.01116 • Published • 41
- HuggingFaceFW/fineweb
  Viewer • Updated • 52.5B • 188k • 2.53k
- tiiuae/falcon-refinedweb
  Viewer • Updated • 968M • 11.3k • 878
- cerebras/SlimPajama-627B
  Preview • Updated • 41.3k • 509
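The corpora above (FineWeb, RefinedWeb, SlimPajama) are large enough that they are usually read in streaming mode rather than downloaded in full. A minimal sketch with the datasets library; the config name "sample-10BT" and the "text" field are assumptions about FineWeb's layout, not details taken from this page:

```python
# Minimal streaming read of one of the datasets listed above.
# Assumes `pip install datasets`; the config name "sample-10BT" and the "text"
# field are assumptions about FineWeb's layout, not details from this page.
from datasets import load_dataset

stream = load_dataset(
    "HuggingFaceFW/fineweb",
    name="sample-10BT",
    split="train",
    streaming=True,
)

# Inspect the first few documents without materializing the full corpus.
for i, example in enumerate(stream):
    print(example["text"][:200])
    if i == 2:
        break
```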
Training
- Exploring the Limits of Transfer Learning with a Unified Text-to-Text Transformer
  Paper • 1910.10683 • Published • 15
- AutoTrain: No-code training for state-of-the-art models
  Paper • 2410.15735 • Published • 59
- LoRA Land: 310 Fine-tuned LLMs that Rival GPT-4, A Technical Report
  Paper • 2405.00732 • Published • 122
- LoRA: Low-Rank Adaptation of Large Language Models
  Paper • 2106.09685 • Published • 55
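The last two entries concern LoRA fine-tuning. A minimal sketch of attaching low-rank adapters to a seq2seq model with the peft library; the base model (google-t5/t5-base, from the Transformer collection above), rank, alpha, and target-module names are illustrative choices, not values prescribed by the papers:

```python
# Minimal LoRA setup sketch for the fine-tuning papers listed above.
# Assumes `pip install transformers peft`; r, lora_alpha, and the T5 attention
# projection names ("q", "v") are illustrative choices, not paper-mandated values.
from transformers import AutoModelForSeq2SeqLM
from peft import LoraConfig, get_peft_model

base = AutoModelForSeq2SeqLM.from_pretrained("google-t5/t5-base")

lora = LoraConfig(
    r=8,                        # low-rank dimension of the adapter matrices
    lora_alpha=16,              # scaling factor applied to the adapter update
    lora_dropout=0.05,
    target_modules=["q", "v"],  # T5 attention query/value projections
    task_type="SEQ_2_SEQ_LM",
)

model = get_peft_model(base, lora)
model.print_trainable_parameters()  # only the adapter weights remain trainable
```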
Diffusion
DataCrawling