--- license: apache-2.0 language: - en base_model: - Qwen/Qwen2.5-3B-Instruct tags: - text-generation - peft - lora - grpo - reinforcement-learning - reasoning - corporate strategy - decision making - business strategy - business analysis - competitive analysis - strategic planning - decision support - management consulting - market analysis - business intelligence - business planning - organizational development - performance improvement - strategic thinking - executive support datasets: - Wildstash/OrgStrategy-Reasoning-1k library_name: transformers pipeline_tag: text-generation pretty_name: Business Analyst Agent (Qwen2.5-3B + LoRA, GRPO) widget: - text: "How should a startup compete against established market leaders?" parameters: max_new_tokens: 256 temperature: 0.7 - text: "Recommend a 90‑day plan to reduce B2B SaaS churn from 30% to <15%." parameters: max_new_tokens: 384 temperature: 0.7 - text: "Corporate strategy decision making for a mid‑market SaaS facing a new competitor." parameters: max_new_tokens: 384 temperature: 0.7 - text: "Competitive analysis and go to market plan for entering the EU market." parameters: max_new_tokens: 384 temperature: 0.7 inference: parameters: max_new_tokens: 512 temperature: 0.7 model-index: - name: Wildstash/business-analyst-agent results: - task: type: text-generation name: Strategy QA (internal heuristic eval) dataset: name: Wildstash/OrgStrategy-Reasoning-1k type: Wildstash/OrgStrategy-Reasoning-1k split: test metrics: - type: structured_output_compliance value: 0.95 - type: framework_accuracy value: 0.92 - type: actionability value: 0.88 --- # πŸ€– Business Analyst Agent (LoRA on Qwen2.5-3B) > AI-powered strategic business analyst trained with GRPO (Group Relative Policy Optimization) for expert-level business strategy and analysis. [![Model](https://img.shields.io/badge/πŸ€—-Model-yellow)](https://huggingface.co/Wildstash/business-analyst-agent) [![License](https://img.shields.io/badge/License-Apache%202.0-blue.svg)](LICENSE) [![Python](https://img.shields.io/badge/Python-3.9+-green.svg)](https://python.org) ## 🎯 Overview The **Business Analyst Agent** is a specialized AI assistant trained on 1000+ real business strategy cases. It provides expert-level strategic analysis, actionable recommendations, and structured business insights using advanced reinforcement learning techniques. > Keywords: corporate strategy decision making, business strategy, competitive analysis, market analysis, go to market, merger and acquisition, digital transformation, business planning, organizational development, performance improvement, management consulting ### ✨ Key Features - **🎯 Strategic Framework Identification**: Automatically selects appropriate business frameworks - **πŸ” Root Cause Analysis**: Deep analysis of business problems and opportunities - **πŸ“‹ Actionable Action Plans**: Detailed plans with owners, timelines, and budgets - **πŸ“Š Organizational Impact Assessment**: Comprehensive stakeholder and resource analysis - **πŸš€ Multi-Domain Expertise**: Market entry, churn reduction, digital transformation, M&A ## πŸš€ Quick Start ### Use from Hugging Face (PEFT adapters) ```python from transformers import AutoModelForCausalLM, AutoTokenizer from peft import PeftModel import torch base = "Qwen/Qwen2.5-3B-Instruct" adapter = "Wildstash/business-analyst-agent" tokenizer = AutoTokenizer.from_pretrained(base, use_fast=True) base_model = AutoModelForCausalLM.from_pretrained(base, torch_dtype=torch.bfloat16, device_map="auto") model = PeftModel.from_pretrained(base_model, adapter) prompt = "How should a startup compete against established market leaders?" inputs = tokenizer(prompt, return_tensors="pt").to(model.device) outputs = model.generate(**inputs, max_new_tokens=512, temperature=0.7) print(tokenizer.decode(outputs[0], skip_special_tokens=True)) ``` ### Use with Hugging Face Inference API ```python import requests API_URL = "https://api-inference.huggingface.co/models/Wildstash/business-analyst-agent" headers = {"Authorization": "Bearer YOUR_HF_TOKEN"} def query(payload): response = requests.post(API_URL, headers=headers, json=payload) return response.json() output = query({ "inputs": "A B2B SaaS company has 30% monthly churn. Recommend a strategy to reduce it to under 15%.", "parameters": {"max_new_tokens": 512, "temperature": 0.7} }) ### Optional: Merge LoRA β†’ standalone checkpoint ```python from transformers import AutoModelForCausalLM, AutoTokenizer from peft import PeftModel import torch base = "Qwen/Qwen2.5-3B-Instruct" adapter = "Wildstash/business-analyst-agent" tok = AutoTokenizer.from_pretrained(base) model = AutoModelForCausalLM.from_pretrained(base, torch_dtype=torch.bfloat16) model = PeftModel.from_pretrained(model, adapter) merged = model.merge_and_unload() merged.save_pretrained("wildstash-biz-analyst-merged", safe_serialization=True) tok.save_pretrained("wildstash-biz-analyst-merged") ``` ``` ## πŸ“Š Example Output **Input**: "B2B SaaS with 30% month-3 churn despite NPS 45. Propose a 90-day plan to reduce churn to <15%." **Output**: ``` **Framework:** Systems Thinking **Root Cause Analysis:** Poor customer service responsiveness and inconsistent onboarding experience **Key Stakeholders:** - Customer Service team: 15 FTEs - Product team: 5 FTEs - Marketing team: 8 FTEs **Organizational Impact:** - Revenue impact: $2.4M annually - Customer lifetime value: $8,400 - Market position: Competitive disadvantage 1. **Cross-train support team** (Owner: Product Manager; Timeline: 6 weeks; Budget: $0.27M; Target: Response time <2 hours) 2. **Launch customer success program** (Owner: Marketing Director; Timeline: 5 weeks; Budget: $0.16M; Target: 25% engagement increase) 3. **Implement feedback loop system** (Owner: CTO; Timeline: 6 weeks; Budget: $0.15M; Target: 95% satisfaction score) ``` ## πŸŽ“ Training Details - **Base Model**: Qwen/Qwen2.5-3B-Instruct (3B parameters) - **Training Method**: LoRA + GRPO (Group Relative Policy Optimization) - **Dataset**: [Wildstash/OrgStrategy-Reasoning-1k](https://huggingface.co/datasets/Wildstash/OrgStrategy-Reasoning-1k) (1000+ business strategy cases) - **Training Framework**: TRL (Transformer Reinforcement Learning) - **LoRA Configuration**: Rank 16, Alpha 32 - **Training Duration**: 2 epochs, ~4 hours on GPU - **Cost**: ~$15 on AWS SageMaker ## πŸ“ˆ Performance Metrics (self-reported) | Metric | Value | |--------|-------| | **Inference Speed** | 1-2s per query (GPU), 30-60s (CPU) | | **Output Quality** | Structured, actionable business strategies | | **Framework Coverage** | 15+ strategic frameworks | | **Domain Coverage** | Market entry, churn reduction, digital transformation, M&A | | **Response Structure** | 95%+ compliance with XML format | ## πŸ—οΈ Architecture ``` β”Œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β” β”‚ USER INPUT β”‚ β”‚ "Help me with market entry strategy" β”‚ β””β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜ β”‚ β–Ό β”Œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β” β”‚ Business Analyst Agent β”‚ β”‚ Qwen2.5-3B + LoRA Adapters + GRPO Training β”‚ β””β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜ β”‚ β–Ό β”Œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β” β”‚ Structured Output β”‚ β”‚ β€’ Strategic Analysis β”‚ β”‚ β€’ Framework Identification β”‚ β”‚ β€’ Action Plan with Resources β”‚ β”‚ β€’ Impact Assessment β”‚ β””β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜ ``` ## 🎯 Use Cases ### 🏒 **Corporate Strategy** - Market entry strategies - Competitive positioning - M&A analysis and integration - Digital transformation planning ### πŸ“Š **Business Analysis** - Churn reduction strategies - Revenue optimization - Operational efficiency - Performance improvement ### πŸš€ **Startup Advisory** - Go-to-market strategies - Product-market fit analysis - Funding strategy development - Growth planning ### πŸ“ˆ **Management Consulting** - Strategic planning - Organizational development - Change management - Process optimization ## πŸ”§ Technical Specifications - **Model Size**: 3B parameters (base) + 16M parameters (LoRA) - **Memory Usage**: ~6GB GPU RAM (inference) - **Context Length**: 32K tokens - **Output Format**: Structured XML with business frameworks - **Supported Languages**: English - **Deployment**: Local, AWS SageMaker, HuggingFace Endpoints ## πŸ“š Dataset Information Trained on **Wildstash/OrgStrategy-Reasoning-1k**, a curated dataset containing: - **1000+ business strategy scenarios** - **15+ strategic frameworks** (Systems Thinking, Lean Analytics, Blue Ocean, etc.) - **Real-world case studies** from various industries - **Expert-validated responses** with structured outputs - **Diverse business contexts** (startups, enterprises, non-profits) ### πŸ”Ž Search keywords (for discoverability) - corporate strategy - decision making - business strategy - competitive analysis - market analysis - go to market - merger and acquisition - digital transformation - business planning - organizational development - performance improvement - management consulting ## πŸš€ Deployment Options ### 1. **Local Inference** (CPU/GPU) ```bash pip install transformers peft torch python -c " from transformers import AutoModelForCausalLM, AutoTokenizer from peft import PeftModel base_model = AutoModelForCausalLM.from_pretrained('Qwen/Qwen2.5-3B-Instruct') model = PeftModel.from_pretrained(base_model, 'Wildstash/business-analyst-agent') tokenizer = AutoTokenizer.from_pretrained('Qwen/Qwen2.5-3B-Instruct') " ``` ### 2. **HuggingFace Inference Endpoints** - **Instance**: GPU Medium (~$0.60/hour) - **Setup**: 5 minutes - **Scalability**: Auto-scaling - **API**: RESTful endpoint ### 3. **AWS SageMaker** - **Instance**: ml.g5.xlarge (~$1.20/hour) - **Setup**: 30 minutes - **Scalability**: High - **Integration**: Native AWS services ## πŸŽ₯ Demo Video [Link to demo video showcasing the Business Analyst Agent] ## πŸ“Š Evaluation Results (overview) - **Framework Accuracy**: 92% (heuristic eval on internal set) - **Actionability**: 88% (expert-judged) - **Structured Output**: 95% (XML compliance) - **Business Relevance**: 90% ## 🀝 Contributing Contributions welcome! Open issues or PRs. ## πŸ“„ License Apache-2.0 ## πŸ™ Acknowledgments - **Base Model**: Qwen2.5-3B-Instruct by Alibaba Cloud - **Training Framework**: TRL by Hugging Face - **Dataset**: Wildstash/OrgStrategy-Reasoning-1k - **Built for**: AWS AI Agent Global Hackathon ## πŸ“ž Support - **Discussions**: [Hugging Face Discussions](https://huggingface.co/Wildstash/business-analyst-agent/discussions) --- **Hugging Face**: [@Wildstash](https://huggingface.co/Wildstash) **Built with ❀️ for the AWS AI Agent Global Hackathon**