AI & ML interests

LLM, Agents, Quality, Security, Benchmarking

Recent Activity

julien-c submitted a paper about 10 hours ago
Shaping capabilities with token-level data filtering
pierlj updated a dataset about 2 months ago
giskardai/phare

davidberenstein1957 posted an update about 2 months ago
alexcombessie updated a Space 6 months ago
pierlj in giskardai/realharm 6 months ago

RealHarm

#2 opened 6 months ago by PhunvVi
davidberenstein1957 posted an update 6 months ago
davidberenstein1957 posted an update 7 months ago
🚨 LLMs recognise bias but also reproduce harmful stereotypes: an analysis of bias in leading LLMs

I've written a new entry in our series on Phare (phare.giskard.ai), the benchmark from Giskard, Bpifrance and Google DeepMind.

This time it covers bias: https://huggingface.co/blog/davidberenstein1957/llms-recognise-bias-but-also-produce-stereotypes

Previous entry on hallucinations: https://huggingface.co/blog/davidberenstein1957/phare-analysis-of-hallucination-in-leading-llms
davidberenstein1957 posted an update 8 months ago
davidberenstein1957 posted an update 9 months ago
pierlj in giskardai/phare 9 months ago

Parquet Upload

#4 opened 9 months ago by pierlj
julien-c posted an update 9 months ago
BOOOOM: Today I'm dropping TINY AGENTS

the 50-lines-of-code Agent in JavaScript 🔥

I spent the last few weeks working on this, so I hope you will like it.

I've been diving into MCP (Model Context Protocol) to understand what the hype was all about.

It is fairly simple, but still quite powerful: MCP is a standard API to expose sets of Tools that can be hooked to LLMs.

But while doing that, I came to my second realization:

Once you have an MCP Client, an Agent is literally just a while loop on top of it. 🤯
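The "agent as a while loop" idea can be sketched in a few lines of JavaScript. This is a hypothetical stub, not the actual tiny-agents code: `mcpClient` stands in for a real MCP client (which would discover and call tools over the protocol), and `llm` stands in for a real model call.

```javascript
// Hypothetical MCP client stub: exposes one tool, as a real client would
// after listing the tools of its connected servers.
const mcpClient = {
  tools: { add: ({ a, b }) => a + b },
  callTool(name, args) {
    return this.tools[name](args);
  },
};

// Hypothetical LLM stub: on a user turn it requests a tool call,
// on a tool-result turn it produces a final answer.
function llm(messages) {
  const last = messages[messages.length - 1];
  if (last.role === "user") {
    return { toolCall: { name: "add", args: { a: 2, b: 3 } } };
  }
  return { content: `The answer is ${last.content}.` };
}

// The agent loop itself: call the model, execute any requested tool
// through the MCP client, feed the result back, and stop once the
// model replies with plain content instead of a tool call.
function runAgent(userPrompt) {
  const messages = [{ role: "user", content: userPrompt }];
  while (true) {
    const reply = llm(messages);
    if (!reply.toolCall) return reply.content; // final answer
    const result = mcpClient.callTool(reply.toolCall.name, reply.toolCall.args);
    messages.push({ role: "tool", content: String(result) });
  }
}

console.log(runAgent("What is 2 + 3?")); // prints "The answer is 5."
```

Everything agent-specific lives in `runAgent`; swapping the stubs for a real MCP client and a real chat endpoint changes the plumbing, not the loop.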

➡️ read it exclusively on the official HF blog: https://huggingface.co/blog/tiny-agents
davidberenstein1957 posted an update 9 months ago
🔥 Announcing FLUX-Juiced: The Fastest Image Generation Endpoint (2.6x faster)!

Optimisations are widely applied and can reduce inference time, but their impact on quality often remains unclear. So we decided to challenge the status quo and create our own optimised version of FLUX.1 [dev], called FLUX-juiced.

Blog: https://huggingface.co/blog/PrunaAI/flux-fastest-image-generation-endpoint
davidberenstein1957 posted an update 10 months ago